In live sporting events, varied lighting and weather conditions, combined with differences between display and camera gamuts, prevent traditional approaches from managing the on-screen representation of brand colors. To overcome these limitations, we use machine learning to identify and color-correct specific pixels in every video frame without affecting non-brand colors, while automatically adjusting to changes in lighting, weather, and camera angle. The resulting algorithm, named ColorNet, is trained on thousands of paired examples of color-correct and color-incorrect frames pulled from broadcast footage of Clemson football games. The first iteration, ColorNet 1.0, successfully performed real-time color correction for Clemson University's orange (RGB 245 102 0) at 60 frames per second (fps). The next challenge was to expand the original model to correct multiple brand colors simultaneously without sacrificing speed or accuracy. The model architecture was modified to accommodate the more complex task of correcting two brand colors at once, and the resulting algorithm, ColorNet 1.5, performs color correction for two brand colors simultaneously in real time. During this phase of development, concerns arose that compression artifacts might be affecting model training and performance. This led to a series of tests comparing model performance when training on MP4-compressed video versus uncompressed MXF video. Ultimately, these tests showed that MP4 compression does impact model performance, but the effect on the model's final visual output does not justify the difficulty of using the much larger uncompressed video and image files for training. This presentation will share the results of the expanded algorithm and the compression-format tests.
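The core idea described above, learning a mapping from color-incorrect pixels to color-correct pixels using paired training frames, can be illustrated with a toy sketch. ColorNet itself is a neural network operating on full video frames; the stand-in below instead fits a simple 3×3 least-squares color transform on paired pixel samples, purely to show the paired-data training concept. All variable names and the synthetic distortion are hypothetical, not taken from the actual system.

```python
# Toy illustration of paired color-correction training (NOT the ColorNet
# architecture): fit a 3x3 linear color transform that maps "incorrect"
# brand-color pixels back to their "correct" reference values.
import numpy as np

rng = np.random.default_rng(0)

# Target brand color (Clemson orange, RGB 245 102 0), normalized to [0, 1].
BRAND = np.array([245, 102, 0]) / 255.0

# Synthetic paired data: "incorrect" pixels are correct pixels pushed
# through an unknown linear distortion (a stand-in for camera/display
# gamut and lighting shifts).
true_distortion = np.array([[0.90, 0.05, 0.00],
                            [0.10, 0.80, 0.05],
                            [0.00, 0.10, 0.95]])
correct = BRAND + 0.02 * rng.standard_normal((500, 3))   # noisy brand pixels
incorrect = correct @ true_distortion.T                   # distorted pixels

# "Training": solve incorrect @ M ~= correct in the least-squares sense.
M, *_ = np.linalg.lstsq(incorrect, correct, rcond=None)

# "Inference": apply the learned correction to the distorted pixels.
restored = incorrect @ M
err = np.abs(restored - correct).max()
```

Because the synthetic distortion here is exactly linear, the least-squares fit recovers it almost perfectly; the real problem is nonlinear and spatially varying, which is why a learned per-pixel model is needed in practice.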