Satellite imagery contains valuable large-scale information for precision farming. However, the low resolution of satellite images can make it challenging to extract crop status information because of mixed pixels, particularly within multi-species crop stands such as grass-clover for silage. In contrast, proximal high-resolution images at centimeter to sub-millimeter scale contain cues for single-crop-species pixels; however, these are often sparsely sampled due to computational limitations. In this paper, we present a preliminary attempt to enrich multispectral satellite images with crop stand population intelligence extracted from sparse proximal RGB samples. The system aims to reinforce satellite imagery with proximal indicators, lowering the risk of faulty interpretation in the knowledge base of a future farm management information system (FMIS). A semantic segmentation algorithm is used to determine the ratios of grass, clover, and soil in the proximal images. Sentinel-2 satellite imagery, with a 10-meter ground sampling distance, serves as the input of the system, and the grass, clover, and soil ratios are estimated simultaneously as the output. The system comprises 1) a method in which the proximal images and satellite imagery are preprocessed and aligned with each other; and 2) a non-linear multi-layer perceptron (MLP) that extracts the grass, clover, and soil ratios. Estimation results show a promising correlation between the clover, grass, and soil ratios and Sentinel-2 data, although more data with a higher diversity of clover-grass mixtures is required to confirm the distinction between clover and grass.
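The mapping described above, from a Sentinel-2 pixel to simultaneous grass, clover, and soil ratios, can be sketched as a small forward pass. This is a minimal illustration, not the paper's implementation: the hidden-layer size, the tanh activation, and the randomly initialized weights (standing in for trained parameters) are all assumptions; the softmax output merely illustrates one way to produce three non-negative ratios that sum to one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 10 Sentinel-2 spectral bands in, 3 class ratios out.
n_bands, n_hidden, n_classes = 10, 16, 3

# Randomly initialized weights stand in for the trained MLP parameters.
W1 = rng.normal(scale=0.1, size=(n_bands, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))
b2 = np.zeros(n_classes)

def predict_ratios(pixel_bands: np.ndarray) -> np.ndarray:
    """Map one pixel's band values to (grass, clover, soil) ratios."""
    h = np.tanh(pixel_bands @ W1 + b1)   # non-linear hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())    # softmax: ratios are >= 0 and sum to 1
    return e / e.sum()

# One simulated 10-band pixel reflectance vector.
ratios = predict_ratios(rng.uniform(0.0, 1.0, n_bands))
```

During training, such a network would be fit so that its outputs match the ratios obtained from the semantic segmentation of the aligned proximal images.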