Object detection from infrared-band (thermal) imagery has been a challenging problem for many years. With the advent of deep Convolutional Neural Networks (CNN), the automated detection and classification of objects of interest within the scene has become popularised, owing to notable performance increases over earlier approaches in the field. These advances in CNN approaches are underpinned by the availability of large-scale, annotated image datasets, which are typically available for visible-band (RGB) imagery. By contrast, there is a lack of prior work specifically targeting object detection in infrared-band images, owing to limited dataset availability, which in turn stems from the more limited availability of, and access to, infrared-band imagery and associated hardware in general. A viable solution to this problem is transfer learning, which enables the use of such CNN techniques within infrared-band (thermal) imagery by leveraging prior training on visible-band (RGB) image datasets, subsequently requiring only a secondary, smaller volume of infrared-band (thermal) imagery for CNN model fine-tuning. This is performed by adopting an existing pre-trained CNN, pre-optimized for generalized object recognition in visible-band (RGB) imagery, and subsequently fine-tuning the resultant model weights towards our specific infrared-band (thermal) imagery domain task. We use two state-of-the-art object detectors, Single Shot Detector (SSD) with a VGG-16 CNN backbone pre-trained on the ImageNet dataset, and You-Only-Look-Once (YOLOv3) with a DarkNet-53 CNN backbone pre-trained on the MS-COCO dataset, to illustrate our visible-band to infrared-band transfer learning paradigm.
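The fine-tuning paradigm described above can be sketched as follows. This is a minimal, hypothetical PyTorch illustration: the tiny backbone, single-layer head, and class count are stand-ins for the actual VGG-16/DarkNet-53 detectors and thermal datasets, not the paper's pipeline.

```python
import torch
import torch.nn as nn

# Stand-in "backbone" playing the role of VGG-16 / DarkNet-53; in practice
# its weights would come from ImageNet / MS-COCO visible-band pre-training.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
)

# Freeze the pre-trained backbone; only the new head is updated during
# fine-tuning on the (smaller) infrared-band dataset.
for p in backbone.parameters():
    p.requires_grad = False

num_classes = 4  # hypothetical number of thermal object classes
head = nn.Linear(32, num_classes)
model = nn.Sequential(backbone, nn.Flatten(), head)

# The optimiser sees only the trainable (head) parameters.
optimiser = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One illustrative fine-tuning step on a batch of fake thermal frames
# (replicated to 3 channels to match the RGB-pre-trained input layer).
x = torch.randn(2, 3, 64, 64)
y = torch.tensor([0, 2])
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimiser.step()
```

In practice the head would be a full detection head (box regression and class scores) rather than a classifier, and the frozen/unfrozen split is a design choice: with more thermal data, later backbone layers can also be unfrozen.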
Exemplar results reported over the FLIR Thermal and MultispectralFIR benchmark datasets show that significant improvements in mAP detection performance, to {0.804 MsFIR, 0.710 FLIR} for SSD and {0.520 MsFIR, 0.308 FLIR} for YOLOv3, are achieved via the use of transfer learning from initial visible-band based CNN training.