An end-to-end automatic cloud database tuning system using deep reinforcement learning

J Zhang, Y Liu, K Zhou, G Li, Z Xiao, B Cheng, J Xing, Y Wang, T Cheng, L Liu, M Ran, Z Li
Proceedings of the 2019 International Conference on Management of Data, 2019. dl.acm.org
Configuration tuning is vital to optimize the performance of a database management system (DBMS). The task is even more tedious and urgent for cloud databases (CDB), whose diverse database instances and query workloads exceed what a database administrator (DBA) can tune manually. Although there are some studies on automatic DBMS configuration tuning, they have several limitations. Firstly, they adopt a pipelined learning model and cannot optimize the overall performance in an end-to-end manner. Secondly, they rely on large-scale, high-quality training samples, which are hard to obtain. Thirdly, there are a large number of knobs that lie in continuous space and have unseen dependencies, and these methods cannot recommend reasonable configurations in such a high-dimensional continuous space. Lastly, in a cloud environment they can hardly cope with changes in hardware configurations and workloads, and have poor adaptability. To address these challenges, we design an end-to-end automatic CDB tuning system, CDBTune, using deep reinforcement learning (RL). CDBTune utilizes the deep deterministic policy gradient method to find optimal configurations in high-dimensional continuous space. CDBTune adopts a trial-and-error strategy to learn knob settings with a limited number of samples to accomplish the initial training, which alleviates the difficulty of collecting massive high-quality samples. CDBTune adopts the reward-feedback mechanism in RL instead of traditional regression, which enables end-to-end learning, accelerates the convergence of our model, and improves the efficiency of online tuning. We conducted extensive experiments under 6 different workloads on real cloud databases to demonstrate the superiority of CDBTune. Experimental results showed that CDBTune had good adaptability and significantly outperformed state-of-the-art tuning tools and DBA experts.
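The abstract describes using the deep deterministic policy gradient (DDPG) method to search a high-dimensional continuous knob space, with a reward-feedback signal in place of traditional regression. The sketch below is a minimal illustration of that idea under assumed details, not the authors' implementation: a hypothetical actor network maps a vector of DBMS internal metrics to knob settings in [0, 1], and a hypothetical performance-delta reward favors higher throughput and lower latency relative to an initial configuration. The metric and knob counts (NUM_METRICS, NUM_KNOBS) and all names are placeholders chosen for illustration.

```python
# Illustrative sketch (not the paper's code): a DDPG-style deterministic actor
# that maps observed DBMS metrics to continuous knob values, plus a simple
# performance-delta reward. All sizes and names below are assumptions.
import torch
import torch.nn as nn

NUM_METRICS = 63   # assumed size of the internal-metrics state vector
NUM_KNOBS = 16     # assumed number of tunable knobs, each scaled to [0, 1]

class Actor(nn.Module):
    """Deterministic policy: state (metrics) -> action (knob values in [0, 1])."""
    def __init__(self, n_metrics: int, n_knobs: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_metrics, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_knobs), nn.Sigmoid(),  # continuous actions in [0, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def reward(thr: float, lat: float, thr0: float, lat0: float) -> float:
    """Hypothetical reward: positive when throughput rises and latency falls
    relative to the initial configuration (thr0, lat0)."""
    return (thr - thr0) / thr0 - (lat - lat0) / lat0

# Usage: recommend knob settings for the current metrics snapshot.
actor = Actor(NUM_METRICS, NUM_KNOBS)
metrics = torch.rand(1, NUM_METRICS)   # stand-in for a real metrics vector
knobs_unit = actor(metrics)            # values in [0, 1], rescaled to real knob ranges
print(knobs_unit.shape)                # torch.Size([1, 16])
```

In a trial-and-error loop, the recommended knobs would be applied to the database, the resulting throughput and latency measured under the workload, and the reward fed back to update the actor (and a critic) as in standard DDPG training.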