The automatic segmentation of medical images plays an important role in diagnosis and therapy. Deep convolutional neural networks (CNNs) represent the state of the art, but they have limitations, particularly regarding the plausibility of the segmentations they generate. Our hypothesis is that improved segmentations will come from the addition of external information, for example medical knowledge, and of auxiliary tasks, such as registration, which will guide and constrain the segmentation. Moreover, the uninterpretable nature of CNNs hinders their use in the medical field. While explainability methods exist for classification, everything remains to be done for segmentation. We aim to develop such methods in order to understand the mechanisms underlying the addition of knowledge and tasks. Although our developments will be generic, we will target specific use cases to demonstrate the impact of the results on clinical practice.