While deep neural networks (DNNs) have demonstrated great proficiency in diverse tasks spanning various domains, the reliability of their predictions remains a subject of ongoing research. In classification problems, there is a common misconception that the probabilities produced by DNNs represent the models' confidence in their assigned classes. The softmax layer at the end of a network forces it to map activations to probabilities between 0 and 1, regardless of the magnitudes of the underlying activations. When the activations are insufficient for a reliable decision, a model should quantify its uncertainty about the true class of the input rather than commit to an uncertain prediction. In this light, this study proposes a distance-based evidential deep learning (d-EDL) classifier with a built-in capability for uncertainty quantification (UQ). The d-EDL classifier comprises two key components: the first uses convolutional neural network (CNN) layers for feature extraction, while the second consists of purpose-designed layers for decision-making. In the second component, the first layer computes basic probability assignments (BPAs) from the extracted feature vectors using a distance metric that measures the proximity between an input pattern and selected data representatives. A clustering algorithm forms the representatives for each data label; proximity to a label's representative reflects the degree to which the input may belong to that label. The second and third layers apply combination rules to merge the BPAs, drawing on probability theory and Dempster-Shafer (D-S) theory. The output of the d-EDL network is a probability distribution extended with uncertainty as an additional class. An end-to-end training method is provided for the proposed classifier, enabling all network parameters to be learned and updated jointly.
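The distance-based BPA construction and D-S combination described above can be illustrated with a minimal sketch. This is not the paper's implementation: the Gaussian-kernel support function, the prototype set, and all names (`distance_bpas`, `dempster_combine`, `gamma`) are illustrative assumptions. Each prototype yields a simple mass function that supports its label in proportion to closeness and assigns the remainder to the whole frame (ignorance); Dempster's rule then fuses these into class masses plus a residual uncertainty mass.

```python
import numpy as np

def distance_bpas(x, prototypes, gamma=1.0):
    """Hypothetical distance-based BPAs: one simple mass function per
    data representative (prototype). Closeness to a prototype supports
    its label; the remaining mass goes to the frame Theta (ignorance)."""
    bpas = []
    for label, p in prototypes:
        d = np.linalg.norm(x - p)
        s = np.exp(-gamma * d ** 2)        # support decays with distance
        bpas.append((label, s, 1.0 - s))   # (label, m({label}), m(Theta))
    return bpas

def dempster_combine(bpas, n_classes):
    """Fuse simple BPAs with Dempster's rule, tracking singleton masses
    m({k}) and the frame mass m(Theta); conflict is normalized away."""
    m = np.zeros(n_classes)  # singleton masses m({k})
    m_theta = 1.0            # frame mass m(Theta), starts as full ignorance
    for label, s, ign in bpas:
        new_m = np.empty(n_classes)
        for k in range(n_classes):
            if k == label:
                # {label}∩{label}, {label}∩Theta, Theta∩{label}
                new_m[k] = m[k] + m_theta * s
            else:
                # {k}∩{label} = empty set -> conflict (dropped);
                # only {k}∩Theta survives
                new_m[k] = m[k] * ign
        new_theta = m_theta * ign
        total = new_m.sum() + new_theta    # equals 1 - conflict
        m, m_theta = new_m / total, new_theta / total
    return m, m_theta  # class masses and residual uncertainty
```

With a prototype near the input for one class and a distant one for another, the combined masses concentrate on the nearby class, while an input far from all prototypes would leave most mass on Theta, i.e., on the uncertainty class.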
Five variants of the d-EDL classifier, each with a different number of data representatives, are trained on an image dataset, and their uncertainty quantification ability is assessed. The assessment evaluates the models in three scenarios, each involving a common cause of misclassification: noise, image rotation, and out-of-distribution (OOD) data. The results demonstrate the excellent capability of the d-EDLs, especially those with 20 and 40 data representatives, to quantify uncertainty rather than produce confident misclassifications when faced with unfamiliar data.