While it is apparent that transferring knowledge between tasks benefits training efficiency, applying trained deep reinforcement learning agents to new tasks is not trivial. Especially when tasks are structured differently, retraining and fine-tuning are not necessarily beneficial; instead, training a new agent from scratch is often the most convenient approach. One potential solution for effectively reusing learned knowledge may be found in hierarchical reinforcement learning. In this paper we investigate the possibility of reusing low-level policies to improve training efficiency when learning manipulation tasks with an industrial robot. We consider four different scenarios and, for three of them, demonstrate increased sample efficiency when training a high-level policy on top of pre-trained low-level skills. In the fourth scenario we find that the transfer fails because an overly ambitious high-level policy forces the low-level skills to be relearned.