Oftentimes, I feel the sting of regret seeing the returns I could have made had I jumped on recent investing bandwagons. Bubbles and financial markets have always co-existed, but network effects and increased accessibility have amplified their magnitudes in ways that, one could argue, have further displaced any sense of coherent investment logic (if there ever was any). In many ways, it is those who were not married to certain ideas about what constituted appropriate investing who benefited the most from these asymmetrical return profiles. That is not to say that many individuals have not equivalently lost a lot of money dabbling in the dark arts - my dad, like many others at the turn of the millennium, got caught up in the latest hot tech stocks during the dotcom bubble and was sufficiently scarred by the experience that he has never invested in the stock market since.
I have plotted below some of the recent financial asset bubbles and their maximum returns, measured from the start of each period to its peak.
import yfinance as yf
import matplotlib.pyplot as plt
import numpy as np

assets = {
    "Dot-Com Bubble": "AAPL",
    "Chinese Stock Market Bubble": "ASHR",
    "Oil Price Bubble": "USO",
    "Housing Bubble": "IYR",
    "Tech Stock Surge": "XLK",
    "Cryptocurrency Bubble": "BTC-USD",
    "Cannabis Stock Bubble": "MJ",
    "GameStop Short Squeeze": "GME",
    "Tesla Stock Surge": "TSLA",
    "Meme Stock Phenomenon": "AMC",
}

def download_price_data(ticker, start_date, end_date):
    # auto_adjust=False keeps the 'Adj Close' column available
    data = yf.download(ticker, start=start_date, end=end_date, progress=False, auto_adjust=False)
    return data['Adj Close']
bubble_periods = {
    "Dot-Com Bubble": ('1999-01-01', '2002-01-01'),
    "Chinese Stock Market Bubble": ('2013-01-01', '2016-01-01'),
    "Oil Price Bubble": ('2007-01-01', '2009-01-01'),
    "Housing Bubble": ('2003-01-01', '2009-01-01'),
    "Tech Stock Surge": ('2020-01-01', '2024-01-01'),
    "Cryptocurrency Bubble": ('2017-01-01', '2018-06-01'),
    "Cannabis Stock Bubble": ('2017-01-01', '2019-01-01'),
    "GameStop Short Squeeze": ('2020-06-01', '2021-06-01'),
    "Tesla Stock Surge": ('2020-01-01', '2021-01-01'),
    "Meme Stock Phenomenon": ('2021-01-01', '2022-01-01'),
}

fig, axes = plt.subplots(nrows=len(assets)//2, ncols=2, figsize=(15, 3*len(assets)//2))
fig.tight_layout(pad=5.0, h_pad=10.0)

for (bubble, ticker), ax in zip(assets.items(), axes.flatten()):
    try:
        bubble_start_date, bubble_end_date = bubble_periods[bubble]
        prices = download_price_data(ticker, bubble_start_date, bubble_end_date)
        # Maximum return from the first observation to the period peak
        max_return = prices.max() / prices.iloc[0] - 1
        prices.plot(ax=ax, label=f"{ticker}\nrMax: {max_return*100:.0f}%")
        ax.set_title(bubble)
        ax.set_xlabel('Date')
        ax.set_ylabel('Adjusted Close Price')
        ax.legend()
    except Exception as e:
        print(f"Error fetching data for {bubble}: {e}")
plt.show()
Is the answer really just to look away as some of these ridiculous opportunities go by? If you are late to the party, maybe. I would like to propose another approach - what if one could detect these manic opportunities while they are still in their early stages and apply a simple trend indicator to enter and exit positions? In the backtests below, I have used a positive return signal with a volatility-targeting overlay to limit risk exposure. Although the results are reasonably positive, there is an important look-ahead bias insofar as I would not have known, at the time, that these were the markets to be monitoring. The only way to realistically capture them would have been either (a) presciently surveying trending assets on a discretionary basis or (b) sampling a very large universe of tradable assets. The latter approach starts to resemble the architecture of a traditional CTA strategy and may in fact explain such strategies' positive skew profile: they generally bleed returns through range-bound markets until big underlying moves allow them to pick up trends systematically.
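To make the position-sizing arithmetic of the volatility-targeting overlay concrete, here is a minimal, self-contained sketch. The helper `vol_target_positions` and its parameters are illustrative (they mirror the 63-day vol window and 126-day signal used in the backtests, but this is not the exact backtest code): a long signal is held only when the trailing return is positive, scaled to a 10% annualized risk budget, capped at 1x leverage, and lagged to avoid look-ahead bias.

```python
import numpy as np
import pandas as pd

def vol_target_positions(returns, target_vol=0.10, max_leverage=1.0):
    """Hypothetical helper: scale a binary long signal by inverse realized vol.

    Long only when the trailing 126-day return is positive, sized so that
    expected annualized risk is roughly target_vol, capped at max_leverage,
    and lagged two days to avoid look-ahead bias.
    """
    ann_vol = returns.rolling(63).std() * np.sqrt(252)      # realized vol, annualized
    signal = (returns.rolling(126).sum() > 0).astype(float)  # positive-return filter
    positions = (signal * target_vol / ann_vol).clip(upper=max_leverage)
    return positions.shift(2)

# Synthetic example: alternating +1.2% / -0.8% daily returns, i.e. a mild
# uptrend with roughly 16% annualized volatility.
returns = pd.Series([0.012, -0.008] * 200)
positions = vol_target_positions(returns)
print(round(positions.iloc[-1], 3))  # 0.625: 10% target / ~16% realized vol
```

The cap matters: in quiet markets inverse-vol sizing would otherwise lever up aggressively right before volatility spikes.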
The issue, once again, is that one is unlikely to know in advance which assets will trend, and so the only way to ensure their capture is to place multiple small bets across a large sample of assets (and perhaps scale up these positions accordingly). To demonstrate this, I have built two types of momentum portfolios - one on equities and another using futures.
def backtest(prices):
    returns = prices.pct_change()
    volatility = returns.ewm(span=63).std()
    # Long when the trailing 126-day return is positive; signal and
    # volatility are lagged two days to avoid look-ahead bias
    signals = np.where(returns.shift(2).rolling(window=126).sum() > 0, 1, 0)
    # Inverse-volatility sizing, capped at 1x leverage
    positions = signals * (1 / volatility.shift(2))
    positions = np.where(positions > 100, 100, positions) / 100
    cumulative_returns = 100 * (1 + positions * returns).cumprod()
    return cumulative_returns
bubble_periods = {
    "Dot-Com Bubble": ('1999-01-01', '2002-01-01'),
    "Chinese Stock Market Bubble": ('2013-01-01', '2016-01-01'),
    "Oil Price Bubble": ('2007-01-01', '2009-01-01'),
    "Housing Bubble": ('2003-01-01', '2009-01-01'),
    "Tech Stock Surge": ('2020-01-01', '2024-01-01'),
    "Cryptocurrency Bubble": ('2017-01-01', '2018-06-01'),
    "Cannabis Stock Bubble": ('2017-01-01', '2019-01-01'),
    "GameStop Short Squeeze": ('2020-06-01', '2021-06-01'),
    "Tesla Stock Surge": ('2020-01-01', '2021-01-01'),
    "Meme Stock Phenomenon": ('2021-01-01', '2022-01-01'),
}

fig, axes = plt.subplots(nrows=len(assets)//2, ncols=2, figsize=(15, 3*len(assets)//2))
fig.tight_layout(pad=5.0, h_pad=5.0)

for (bubble, ticker), ax in zip(assets.items(), axes.flatten()):
    try:
        bubble_start_date, bubble_end_date = bubble_periods[bubble]
        prices = download_price_data(ticker, bubble_start_date, bubble_end_date)
        cumulative_returns = backtest(prices)
        ax.plot(cumulative_returns.index, cumulative_returns, color='green', label='Cumulative Returns')
        ax.set_title(bubble)
        ax.set_xlabel('Date')
        ax.set_ylabel('Performance')
        annualized_return = (cumulative_returns.iloc[-1] / 100) ** (252 / len(cumulative_returns.index)) - 1
        annualized_volatility = cumulative_returns.pct_change().std() * np.sqrt(252)
        sharpe_ratio = annualized_return / annualized_volatility
        stats_text = (f'Return (ann.): {annualized_return:.2%}\n'
                      f'Volatility (ann.): {annualized_volatility:.2%}\n'
                      f'Sharpe Ratio: {sharpe_ratio:.2f}')
        ax.text(0.95, 0.05, stats_text, transform=ax.transAxes, fontsize=10,
                verticalalignment='bottom', horizontalalignment='right',
                bbox=dict(facecolor='white', edgecolor='white', boxstyle='round,pad=0.5'))
    except Exception as e:
        print(f"Error fetching data for {bubble}: {e}")
plt.show()
The first approach I have used to build a momentum strategy is based on individual time series (TSMOM). Rather than comparing an asset's trend relative to its peers, each trend is evaluated on its own absolute strength. For example, the strategy below is an equal risk-weighted basket of futures across different asset classes, where signals may be either positive (buy) or negative (sell). The filter used to evaluate momentum is not prescriptive - popular indicators include the linear regression slope, the Sharpe ratio, simple moving averages, and so on. Despite some differences here and there, I have found that the choice of indicator is not hugely impactful (drastically different results would more likely indicate out-of-sample sensitivity). For that reason, I prefer to keep things simple and have used a rolling return indicator to assess the top trending assets. Positions have also been adjusted for volatility, as in the previous example.
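To illustrate why the choice of filter matters less than one might expect, here is a small sketch (with illustrative parameters) comparing three of the indicators mentioned above. On a cleanly trending synthetic series, all three agree on direction at every observation; on real data the agreement is of course weaker, but typically still high.

```python
import numpy as np
import pandas as pd

# A steadily trending price series: all three filters should agree on direction.
price = pd.Series(100 * 1.001 ** np.arange(500))
lookback = 126

# (1) Sign of the rolling return
ret_sig = np.sign(price.pct_change(lookback))

# (2) Price relative to its simple moving average
sma_sig = np.sign(price - price.rolling(lookback).mean())

# (3) Sign of the linear-regression slope of log prices
def slope_sign(window):
    t = np.arange(len(window))
    return np.sign(np.polyfit(t, np.log(window), 1)[0])

reg_sig = price.rolling(lookback).apply(slope_sign, raw=True)

signals = pd.DataFrame({'return': ret_sig, 'sma': sma_sig, 'slope': reg_sig}).dropna()
agreement = (signals.nunique(axis=1) == 1).mean()  # share of days all three agree
print(f"Directional agreement: {agreement:.0%}")   # 100% on a clean trend
```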
import yfinance as yf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import date, datetime

# Function to get historical data from Yahoo Finance
def get_historical_data(symbol, start_date, end_date):
    data = yf.download(symbol, start=start_date, end=end_date, progress=False, auto_adjust=False)
    return data

# Calculate 6-month momentum signal
def calculate_momentum_signal(data, lookback_period=126):
    data['Returns'] = data['Close'].pct_change()
    data['Momentum_Signal'] = np.where(data['Returns'].rolling(lookback_period).sum() > 0, 1, -1)
    return data

# Apply a volatility target of 10% (annualized)
def apply_volatility_target(data, target_volatility=0.1):
    data['DailyVol'] = data['Returns'].rolling(window=63).std() * np.sqrt(252)  # Adjust window size as needed
    data['Adjusted_Position'] = data['Momentum_Signal'] * (target_volatility / data['DailyVol'])
    return data

# Backtest each individual futures strategy
def backtest_strategy(data):
    data['Position'] = data['Adjusted_Position'].shift(2)  # Shift to avoid look-ahead bias
    data['Strategy_Returns'] = data['Position'] * data['Returns']
    data['Cumulative_Strategy_Returns'] = (1 + data['Strategy_Returns']).cumprod()
    return data

# Plot the results for each individual strategy
def plot_individual_strategy(data, symbol, ax):
    ax.plot(data['Cumulative_Strategy_Returns'], label=f'{symbol} - Strategy Returns', linewidth=2)
    ax.plot((1 + data['Returns']).cumprod(), label=f'{symbol} - Benchmark Returns', linestyle='--')
    ax.legend()
    ax.set_title(f'{symbol}')
    ax.set_xlabel('Date')
    ax.set_ylabel('Cumulative Returns')

# Backtest an equally-weighted basket of individual futures strategies
def aggregate_strategies(strategies):
    common_index = strategies[0].index
    strategies = [strategy.reindex(common_index) for strategy in strategies]
    aggregate_returns = sum(strategy['Strategy_Returns'] for strategy in strategies)
    aggregate_returns /= len(strategies)
    # Add back the cash return on margin: 13-week T-bill yield, de-annualized
    cash_returns = (yf.download('^IRX', start_date, end_date, progress=False, auto_adjust=False)['Adj Close'] / 25200).reindex(aggregate_returns.index)
    cumulative_returns = (1 + aggregate_returns + cash_returns).cumprod()
    return cumulative_returns
# Main execution
if __name__ == '__main__':
    # Define the list of symbols for various futures contracts
    symbols = ['ES=F', 'NQ=F', 'YM=F',          # Equity index futures
               '6E=F', '6J=F', '6B=F', '6A=F',  # Currency futures
               'ZT=F', 'ZF=F', 'ZN=F', 'ZB=F',  # Bond futures
               'GC=F', 'SI=F', 'HG=F', 'CL=F']  # Commodity futures

    # Define the time period
    start_date = '2002-01-01'
    end_date = date.today()

    all_strategies = []
    num_strategies = len(symbols)
    num_rows = int(np.ceil(num_strategies / 3))
    num_cols = 3

    # Create subplots for individual strategies
    fig, axes = plt.subplots(nrows=num_rows, ncols=num_cols, figsize=(15, 3 * num_rows))

    # Loop through symbols and perform the backtest
    for i, symbol in enumerate(symbols):
        historical_data = get_historical_data(symbol, start_date, end_date)
        historical_data = calculate_momentum_signal(historical_data)
        historical_data = apply_volatility_target(historical_data, target_volatility=0.1)
        historical_data = backtest_strategy(historical_data)
        all_strategies.append(historical_data[['Strategy_Returns', 'Returns']])
        plot_individual_strategy(historical_data, symbol, axes[i // num_cols, i % num_cols])
    plt.tight_layout()
    plt.show()

    # Aggregate and equally weight the strategies
    aggregate_returns = aggregate_strategies(all_strategies)
    annualized_return = aggregate_returns.iloc[-1] ** (252 / len(aggregate_returns.dropna().index)) - 1
    annualized_volatility = aggregate_returns.pct_change().std() * np.sqrt(252)
    sharpe_ratio = annualized_return / annualized_volatility
    stats_text = (f'Return (ann.): {annualized_return:.2%}\n'
                  f'Volatility (ann.): {annualized_volatility:.2%}\n'
                  f'Sharpe Ratio: {sharpe_ratio:.2f}')

    # Plot the results for the aggregate strategy
    plt.figure(figsize=(10, 6))
    plt.plot(aggregate_returns, label='Aggregate Strategy Returns', linewidth=2)
    plt.text(0.02, 0.95, stats_text, transform=plt.gca().transAxes, fontsize=10,
             verticalalignment='top', bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
    plt.title('Time-Series Momentum Strategy Backtest')
    plt.xlabel('Date')
    plt.ylabel('Cumulative Returns')
    plt.show()
A cross-sectional momentum (XMOM) strategy invests in the top-performing assets within a selection pool. In this particular example, I have applied the methodology to equity sector ETFs. Apart from sheer laziness, the main reason I have decided to do this is to minimize the risk of survivorship bias: if I were to use a static universe of individual stocks, I would risk excluding names which have since been delisted or defaulted, and those would be important determinants of my backtest results. This is less of an issue for more sophisticated financial time-series datasets, but I aim to keep this analysis general and accessible. In any case, the strategy reviews the top trending sectors of the S&P 500 and rotates on a weekly basis (if there are any signal updates). I have also kept the strategy dollar-neutral by shorting an equivalent notional of futures, since I wanted to evaluate the strategy's merit after excluding the market risk premium.
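The survivorship-bias point is easy to see with a toy example (the numbers are made up for illustration): dropping a single delisted name flips the backtested average return from a loss to a gain.

```python
import numpy as np

# Hypothetical total returns for a 5-stock universe; the last name loses 80%
# and is delisted, so it would be missing from a survivors-only dataset.
full_universe = np.array([0.10, 0.06, 0.08, 0.12, -0.80])
survivors_only = full_universe[:-1]

print(f"Mean return, full universe: {full_universe.mean():.1%}")   # -8.8%
print(f"Mean return, survivors only: {survivors_only.mean():.1%}")  # 9.0%
```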
import pandas as pd
import numpy as np
import yfinance as yf
import matplotlib.pyplot as plt
from datetime import date, datetime

def downloadData(tickers, start_date, end_date):
    data = yf.download(tickers, start=start_date, end=end_date, progress=False, auto_adjust=False)['Adj Close']
    return data

def calculate_returns(data):
    return data.pct_change()

def calculate_cross_sectional_momentum(priceData, lookback):
    # Lag by two days to avoid look-ahead bias
    return priceData.pct_change(lookback).shift(2)

def select_top_n_sectors(momentum_values, n):
    # Only refresh selections every fifth trading day (weekly rotation)
    signalsUpdate = pd.Series(0, index=momentum_values.index)
    signalsUpdate.iloc[::5] = 1
    selected_values = momentum_values.apply(lambda x: x.nlargest(n).index, axis=1).where(signalsUpdate == 1).ffill()
    return selected_values

def equal_weight_portfolio(returns, selected_tickers):
    selected_values = pd.DataFrame(0, index=returns.index, columns=returns.columns)
    for dt, cols in selected_tickers.items():
        selected_values.loc[dt, cols] = returns.loc[dt, cols]
    portfolio_returns = selected_values.sum(axis=1) / (selected_values != 0).sum(axis=1)
    return portfolio_returns

def backtest_strategy(start_date, end_date):
    sector_tickers = ['XLY', 'XLC', 'XLE', 'XLF', 'XLV', 'XLI', 'XLB', 'XLRE', 'XLK']
    data = downloadData(sector_tickers, start_date, end_date)
    returns = calculate_returns(data)
    momentum_values = calculate_cross_sectional_momentum(data, 126)
    top_sectors = select_top_n_sectors(momentum_values, 5)
    portfolio_returns = equal_weight_portfolio(returns, top_sectors)
    # Dollar-neutral hedge: short an equivalent notional of index futures
    benchmarkReturns = calculate_returns(downloadData('ES=F', start_date, end_date))
    cumulative_returns = (1 + portfolio_returns - benchmarkReturns).cumprod()
    total_return = (1 + portfolio_returns).cumprod()
    #benchmarkCumReturns = (1 + benchmarkReturns).cumprod()
    annualized_return = cumulative_returns.iloc[-1] ** (252 / len(cumulative_returns.dropna().index)) - 1
    annualized_volatility = cumulative_returns.pct_change().std() * np.sqrt(252)
    sharpe_ratio = annualized_return / annualized_volatility
    stats_text = (f'Return (ann.): {annualized_return:.2%}\n'
                  f'Volatility (ann.): {annualized_volatility:.2%}\n'
                  f'Sharpe Ratio: {sharpe_ratio:.2f}')
    plt.figure(figsize=(10, 6))
    plt.plot(cumulative_returns, label='Portfolio')
    #plt.plot(benchmarkCumReturns, label='S&P 500')
    plt.title('Cross-Sectional Momentum Strategy')
    plt.xlabel('Date')
    plt.ylabel('Cumulative Returns')
    plt.text(0.02, 0.95, stats_text, transform=plt.gca().transAxes, fontsize=10,
             verticalalignment='top', bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
    #plt.legend()
    plt.show()
    return total_return

if __name__ == "__main__":
    start_date = "2000-01-01"
    end_date = date.today()
    xMomStrategy = backtest_strategy(start_date, end_date)
Combining the two strategies results in improved risk-adjusted performance. I have taken the hedge out of XMOM, since the positive skew profile of TSMOM nicely offsets the negative skew we typically see in pure equity beta (much of the benefit of combining the two strategies lies in this effect, more than in the sector rotation itself). The strategy may be improved in a number of ways, but I believe this has more to do with the smaller details than with well-guarded secrets. This is by no means an exhaustive list of considerations, but at least off the top of my head, I would investigate the following:
To conclude, I have highlighted two momentum-based approaches to capturing trending markets (some of which may include bubbles). Although these are among the most basic and well-known strategies, having been deployed by CTAs for decades, there is something to be said for their long-term efficacy. Mean-reversion strategies get sucked dry particularly quickly and involve a space race for better technology, data, and market access that may be beyond the reach of most hobbyist investors. Fortunately, financial markets trend on a longer-term basis and therefore allow relatively simple systematic strategies to join in for the ride.
import seaborn as sns

xMomStrategy.name = 'XMOM'
aggregate_returns.name = 'TSMOM'
df = pd.merge(xMomStrategy, aggregate_returns, left_index=True, right_index=True)
df = df.pct_change().dropna()
df['Combined'] = 0.2 * df['XMOM'] + 0.8 * df['TSMOM']
cumReturn = (1 + df['Combined']).cumprod()

annualized_return = cumReturn.iloc[-1] ** (252 / len(cumReturn.dropna().index)) - 1
annualized_volatility = cumReturn.pct_change().std() * np.sqrt(252)
sharpe_ratio = annualized_return / annualized_volatility
stats_text = (f'Return (ann.): {annualized_return:.2%}\n'
              f'Volatility (ann.): {annualized_volatility:.2%}\n'
              f'Sharpe Ratio: {sharpe_ratio:.2f}')

fig, axs = plt.subplots(1, 2, figsize=(12, 4))

# Plot the cumulative return
axs[0].plot(cumReturn)
axs[0].text(0.02, 0.95, stats_text, transform=axs[0].transAxes, fontsize=10,
            verticalalignment='top', bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
axs[0].set_title('Historical Performance')

# Plot the histogram of daily returns, annotated with skewness
sns.histplot(df, ax=axs[1])
axs[1].set_xlim(-0.05, 0.05)
stats_text = str(df.skew())
axs[1].text(0.02, 0.95, stats_text, transform=axs[1].transAxes, fontsize=10,
            verticalalignment='top', bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
axs[1].set_title('Daily Returns Distribution')
plt.tight_layout()
plt.show()