Infrared and visible images of the same scene carry different but correlated information. Traditional convolutional sparse representation fusion exploits the individual characteristics of each image but ignores the correlation between the infrared and visible modalities, resulting in insufficient detail retention and low contrast. To overcome these issues, joint convolutional sparse coding is introduced and a novel visible/infrared image fusion method is proposed. First, low-pass decomposition splits each source image into a low-pass (base) component and a high-pass (detail) component. Next, the base layers are fused using joint convolutional sparse coding with a "choose-maximum" strategy, and the detail layers are fused with an "absolute-maximum" strategy. Finally, the fused low- and high-pass components are reconstructed into the final fused image. The proposed method not only avoids patch-based sparse fusion, which can destroy an image's global structural features, but also fully exploits the correlated information between the infrared and visible images. Fusion experiments on four groups of typical infrared and visible images verify the superiority of the proposed algorithm. The experimental results show that it achieves the best performance in both subjective visual quality and objective evaluation indicators. Compared with the fusion method based on (single-image) convolutional sparse representation, the three Q-series objective evaluation indicators increase by 3.83%, 5.31%, and 0.48%, respectively.
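The two-scale pipeline described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: a uniform (box) filter stands in for the unspecified low-pass decomposition, and a plain pixel-wise maximum stands in for the joint convolutional sparse coding of the base layers, which is too involved for a short sketch. The detail layers use the stated absolute-maximum rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_two_scale(ir, vis, size=11):
    """Sketch of two-scale infrared/visible fusion: low-pass
    decomposition, choose-maximum on base layers, absolute-maximum
    on detail layers, then reconstruction by summation.
    NOTE: the paper fuses base layers via joint convolutional sparse
    coding; a pixel-wise maximum is used here as a simple stand-in."""
    ir = ir.astype(float)
    vis = vis.astype(float)
    # Low-pass decomposition: base = blurred image, detail = residual
    # (assumed box filter; the paper does not fix the filter here).
    base_ir, base_vis = uniform_filter(ir, size), uniform_filter(vis, size)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # "Choose-maximum" fusion of the base layers.
    base_fused = np.maximum(base_ir, base_vis)
    # "Absolute-maximum" fusion of the detail layers: keep the
    # coefficient with the larger magnitude at each pixel.
    det_fused = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    # Reconstruction: sum of fused low- and high-pass components.
    return base_fused + det_fused
```

When both inputs are identical, the decomposition and max rules leave the image unchanged, which is a quick sanity check on the reconstruction step.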