The work presented in this thesis addresses the problem of Out-of-Distribution (OOD) detection in deep learning-based classifiers. The emphasis is on investigating real-world settings, where ID and OOD samples are hard to distinguish due to their high semantic similarity. Despite their strong classification performance, deep networks fail to handle OOD samples correctly. Training relies heavily on the closed-world assumption: models are expected to receive only samples drawn from a distribution similar to the training data. In dynamic environments, this assumption is too restrictive and leads to severe performance degradation. While uncertainty quantification methods offer an elegant way to represent predictive confidence, the analysis presented here reveals several of their shortcomings in OOD detection. Motivated by these shortcomings of uncertainty-based detectors, a novel mechanism is proposed to detect OOD samples efficiently. The mechanism leverages features extracted from intermediate layers of a classifier and examines their activations. These activations are distinctive for In-distribution (ID) samples and can be used to distinguish dissimilar data, even when such data are assigned the same label. To evaluate its performance, the proposed method was first combined with an uncertainty-based classifier and tested on OOD samples chosen to be semantically similar to the ID samples. The method significantly outperforms the uncertainty-threshold baseline for OOD detection. Furthermore, the approach was integrated with common classifiers and showed improved performance on several standard OOD benchmark datasets.
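As a minimal illustrative sketch of this idea (assuming a PyTorch/torchvision ResNet-18 and a simple per-dimension deviation statistic, which stands in for, and is not necessarily identical to, the score developed in the thesis), intermediate-layer activations can be compared against statistics estimated on ID data:

```python
# Sketch: score samples by how typical their intermediate-layer activations
# are relative to ID data. Illustration of the general idea only.
import torch
from torchvision import models

# Pretrained classifier whose intermediate features are tapped via a hook.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
features = {}

def hook(_module, _inputs, output):
    # Global-average-pool the feature map into one activation vector per sample.
    features["z"] = output.mean(dim=(2, 3)).detach()

# Assumption: tap the last residual block; the thesis may use other layers.
model.layer4.register_forward_hook(hook)

@torch.no_grad()
def activation_vectors(x):
    model(x)
    return features["z"]

@torch.no_grad()
def fit_id_statistics(id_loader):
    """Estimate mean/std of ID activations (a simple stand-in statistic)."""
    vecs = torch.cat([activation_vectors(x) for x, _ in id_loader])
    return vecs.mean(dim=0), vecs.std(dim=0) + 1e-6

@torch.no_grad()
def ood_score(x, mu, sigma):
    """Higher score = activations deviate more from typical ID activations."""
    z = activation_vectors(x)
    return ((z - mu) / sigma).abs().mean(dim=1)  # one score per sample

# Usage (with hypothetical loaders/threshold chosen on ID validation data):
# mu, sigma = fit_id_statistics(id_val_loader)
# is_ood = ood_score(test_batch, mu, sigma) > threshold
```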