This paper proposes a novel generative adversarial network (GAN) based on the UNet++ architecture for infrared and visible image fusion. The core idea is to establish an adversarial game between a generator that produces the fused image and a discriminator that judges whether the fused image meets the required standard. The generator adopts the UNet++ structure, which is not deep but establishes dense connections among its shallow layers, giving it a strong ability to capture shallow features. The discriminator adopts a structure similar to the Visual Geometry Group (VGG) network. The loss function compares the high-frequency components of the fused image with those of the two source images, encouraging the fused image to retain more high-frequency information. For extracting high-frequency details from the source images, this paper proposes two gradient extraction methods based on different combinations of directional extraction operators and high-frequency extraction operators. Experiments compare these two methods with other fusion networks and show that the fused image generated by our network is highly similar to the infrared image while preserving much of the gradient information of the visible image.
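To make the high-frequency comparison concrete, the sketch below illustrates one plausible form of such a loss: Sobel operators extract directional gradients, their magnitude serves as the high-frequency content, and the fused image is penalized for deviating from the gradients of either source. This is a minimal NumPy illustration under assumed choices (Sobel kernels, mean-squared penalty, equal weighting of the two sources), not the paper's exact loss; all function names here are hypothetical.

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'same' convolution with zero padding (illustration only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Directional extraction operators (assumed: Sobel in x and y)
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(img):
    """Combine directional gradients into a high-frequency map."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)

def high_freq_loss(fused, ir, vis):
    """Penalize mismatch between the fused image's high-frequency
    content and that of both source images (assumed MSE form)."""
    g_f = gradient_magnitude(fused)
    g_ir = gradient_magnitude(ir)
    g_vis = gradient_magnitude(vis)
    return np.mean((g_f - g_ir) ** 2) + np.mean((g_f - g_vis) ** 2)
```

In a full training loop this term would be computed on generator outputs and added to the adversarial loss; swapping the Sobel kernels for other directional or high-pass operators yields the different extraction variants the paper refers to.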