This thesis presents a novel Weakly Supervised Mask Data Distillation technique, WeSuperMaDD, which generates Pseudo Segmentation Labels (PSLs) to automate the creation of image segmentation labels that are typically produced by human annotators. WeSuperMaDD uniquely generates PSLs for datasets with limited diversity using learned image features extracted by Convolutional Neural Networks (CNNs) that were not trained for image segmentation. It also introduces a mask refinement system that automatically searches for the PSL with the fewest foreground pixels satisfying cost constraints. WeSuperMaDD was validated on scene text detection datasets and was found to produce PSLs with higher segmentation accuracy than existing weakly supervised approaches, and with accuracy comparable to the state-of-the-art semi-supervised text PSL generation method, without requiring fully labeled data. Furthermore, a CNN trained for text segmentation using PSLs generated by WeSuperMaDD achieved higher segmentation accuracy than one trained using Naive PSLs.