PET-CT scans using 18F-FDG are increasingly used to detect cancer, but interpretation can be challenging owing to non-specific uptake and the complex anatomical structures nearby. To aid this process, we investigate the potential of deep learning tools for automated lesion detection in 18F-FDG scans. A five-layer convolutional neural network (CNN) with 2×2 kernels, rectified linear unit (ReLU) activations and two dense layers was trained to detect cancerous lesions in 2D axial image segments from PET scans. Pre-contoured scans from a retrospective cohort study of 480 oesophageal cancer patients were split 80:10:10 into training, validation and test sets. These were then used to generate a total of ~14,000 segments of 45×45 pixels, where tumour-present segments were centred on the marked lesion and tumour-absent segments were placed at random locations outside it. ROC curves generated from the training and validation datasets gave an average AUC of just under 95%.
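The patch-generation step described above could be sketched as follows. This is a minimal illustration, not the authors' pipeline: the array names (`axial_slice`, `lesion_mask`), the centroid-based centring, and the rejection-sampling loop for lesion-free segments are all assumptions; only the 45×45 segment size and the centred-positive / random-negative scheme come from the text.

```python
import numpy as np

PATCH = 45            # 45x45 pixel segments, as in the abstract
HALF = PATCH // 2     # 22 pixels either side of the centre

def extract_patches(axial_slice, lesion_mask, n_negative=1, rng=None):
    """Cut tumour-present and tumour-absent 45x45 segments from one
    2D axial slice. `lesion_mask` is a hypothetical boolean array
    marking the pre-contoured lesion."""
    rng = np.random.default_rng(rng)
    h, w = axial_slice.shape
    patches, labels = [], []

    # Tumour-present segment: centred on the marked lesion (here, its centroid).
    ys, xs = np.nonzero(lesion_mask)
    if ys.size:
        cy = int(np.clip(int(ys.mean()), HALF, h - HALF - 1))
        cx = int(np.clip(int(xs.mean()), HALF, w - HALF - 1))
        patches.append(axial_slice[cy - HALF:cy + HALF + 1,
                                   cx - HALF:cx + HALF + 1])
        labels.append(1)

    # Tumour-absent segments: random centres whose patch avoids the lesion.
    for _ in range(n_negative):
        for _ in range(100):  # retry until a lesion-free patch is found
            cy = int(rng.integers(HALF, h - HALF))
            cx = int(rng.integers(HALF, w - HALF))
            region = lesion_mask[cy - HALF:cy + HALF + 1,
                                 cx - HALF:cx + HALF + 1]
            if not region.any():
                patches.append(axial_slice[cy - HALF:cy + HALF + 1,
                                           cx - HALF:cx + HALF + 1])
                labels.append(0)
                break

    return np.stack(patches), np.array(labels)
```

Applied slice-by-slice over the pre-contoured cohort, a routine of this kind would yield the balanced set of labelled segments used for training.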