Anomaly prediction involves estimating defects in manufactured products from the condition of the processing line, as recorded by sensors. The task is important because it can forecast future anomalies, thereby improving product quality and reducing investment costs. Explainable prediction provides intelligible explanations for the model's predictions, enabling manufacturing managers to understand the relationship between sensor information and the resulting anomalies and to take steps to mitigate their occurrence. In this project, we focused on predicting periods of high anomaly counts over a future horizon of 72 hours, based on sensor data from the past 24 hours. Motivated by the success of convolutional kernel-based methods in capturing temporal relationships, we investigated two types of convolutional approaches: ROCKET-EBM, which uses a large number of random convolutions, and InceptionTime, which relies on trained convolutions. Because these models are not inherently interpretable, we employed a post-hoc explanation approach, Layer-wise Relevance Propagation (LRP). For ROCKET-EBM, we designed an LRP-based algorithm, while for InceptionTime, LRP was applied directly. In experiments on real manufacturing data, both models outperformed our baseline models. We further evaluated the quality of the explanations produced by LRP for both models. For the ROCKET-based model, we observed that LRP identifies the features that are crucial to the model. Because of the complexity of the InceptionTime model, we applied relevance score filtering to eliminate small intermediate relevance scores; the features identified through this process were sufficient to reproduce the model's performance.
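To make the prediction setup concrete, the sketch below illustrates one way to frame the "past 24 hours to next 72 hours" task as a windowed classification problem with ROCKET features. The window sizes, the anomaly-count threshold, the toy data, and the ridge classifier used in place of the EBM are illustrative assumptions, not the configuration actually used in this project.

```python
# Hypothetical sketch of the windowed prediction setup (not the project's exact pipeline).
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import Rocket


def make_windows(sensors, anomaly_counts, past=24, horizon=72, threshold=5):
    """Build (X, y): X holds past-window sensor segments, y flags whether the
    following horizon contains a high anomaly count (threshold is assumed)."""
    X, y = [], []
    for t in range(past, len(anomaly_counts) - horizon):
        X.append(sensors[:, t - past:t])                       # (n_channels, past)
        y.append(int(anomaly_counts[t:t + horizon].max() >= threshold))
    return np.stack(X), np.array(y)                            # X: (n_windows, n_channels, past)


# Toy multichannel sensor stream and hourly anomaly counts (placeholder data).
rng = np.random.default_rng(0)
sensors = rng.normal(size=(8, 2000))
anomaly_counts = rng.poisson(2.0, size=2000)

X, y = make_windows(sensors, anomaly_counts)

# ROCKET: a large bank of random convolutional kernels, here paired with a
# simple linear classifier; the report pairs the features with an EBM instead.
rocket = Rocket(num_kernels=1000, random_state=0)
features = rocket.fit_transform(X)
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)).fit(features, y)
print("training accuracy:", clf.score(features, y))
```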