Any machine learning software driven by data involving sensitive information should be carefully designed so that primary concerns related to human values and rights are adequately protected. Privacy and fairness are two of the most prominent such concerns in machine learning, attracting attention from both academia and industry. However, they are often addressed independently, even though both depend heavily on the distribution of data over sensitive personal attributes. Such a split approach fails to reflect the reality that decisions made to address one concern can complicate decisions about the other. As a starting point, we present an empirical study characterizing the impact of differential privacy on the utility and fairness of inferential systems. In particular, we investigate whether various differential privacy techniques, applied to different parts of the machine learning pipeline, pose risks to group fairness. Our analysis reveals a complex picture of the interplay between privacy and fairness, highlighting the sensitivity of commonly used fairness metrics and showing that unfair outcomes are sometimes an unavoidable side effect of differential privacy. We further discuss essential factors to consider when designing fairness evaluations of differentially private systems.