Abstract:
Reinforcement learning has emerged as a promising approach for enhancing profitability in cryptocurrency trading. However, the inherent volatility of the market, especially during bearish periods, poses significant challenges in this domain. Existing literature addresses this issue by adopting single-agent techniques such as deep Q-network (DQN), advantage actor-critic (A2C), and proximal policy optimization (PPO), or ensembles thereof. Despite these efforts, the mechanisms employed to mitigate losses during bearish market conditions in the cryptocurrency context lack robustness, and the performance of reinforcement learning methods for cryptocurrency trading therefore remains constrained. To overcome this limitation, we present a novel cryptocurrency trading method that leverages multi-agent proximal policy optimization (MAPPO). Our approach incorporates a collaborative multi-agent scheme and a local-global reward function to optimize both individual and collective agent performance. Employing a multi-objective optimization technique and a multi-scale continuous loss (MSCL) reward, we train the agents with a progressive penalty mechanism that discourages consecutive losses of portfolio value. We evaluate our method against multiple baselines and observe superior cumulative returns. The strength of our method is most evident on the bearish test set, where only our approach yields a profit: it achieves a cumulative return of 2.36%, whereas the baseline methods produce negative cumulative returns. Compared with FinRL-Ensemble, a reinforcement learning-based method, our approach achieves a 46.05% greater cumulative return on the bullish test set.
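The abstract does not give the reward formulas, so the following is only a minimal illustrative sketch of how a local-global reward with a progressive penalty for consecutive portfolio losses could be structured. All names and parameters here (mscl_reward, alpha, base_penalty, the linear penalty schedule) are assumptions for illustration, not the paper's actual MSCL definition.

```python
import numpy as np

def mscl_reward(local_returns, global_return, loss_streak,
                alpha=0.5, base_penalty=0.01):
    """Illustrative local-global reward with a progressive penalty.

    local_returns : per-agent (per-asset) step returns
    global_return : portfolio-level step return
    loss_streak   : consecutive prior steps with negative portfolio return
    alpha         : assumed weight balancing local vs. global objectives
    base_penalty  : assumed penalty unit scaled by the loss streak length
    """
    local_term = np.mean(local_returns)      # individual agent performance
    global_term = global_return              # collective (portfolio) performance
    reward = alpha * local_term + (1.0 - alpha) * global_term

    # Progressive penalty: each additional consecutive losing step
    # increases the deduction, discouraging sustained drawdowns.
    if global_return < 0:
        reward -= base_penalty * (loss_streak + 1)
    return reward


# Example: three agents, a losing portfolio step, second consecutive loss.
print(mscl_reward(local_returns=[0.002, -0.004, -0.001],
                  global_return=-0.003,
                  loss_streak=1))
```

In this sketch the penalty grows linearly with the length of the losing streak; the paper's actual multi-scale formulation may differ.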