I examine the sensitivity of scoring rules for distribution forecasts along two dimensions: sensitivity to linear rescaling of the data and the influence of measurement error on the outcome of forecast evaluation. First, I show that all commonly used scoring rules for distribution forecasts are robust to rescaling of the data. Second, I show that the forecast ranking based on the continuous ranked probability score (CRPS) is less sensitive to gross measurement errors than the ranking based on the log score. These theoretical results are complemented by a simulation study aligned with frequently revised quarterly US GDP growth data and by an empirical application forecasting realized variances of S&P 100 constituents.
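As a minimal illustration of the contrast described above (not taken from the paper itself), the following sketch evaluates two Gaussian forecasters by the log score and by the closed-form Gaussian CRPS, once on clean data and once after injecting a single gross measurement error. The forecaster names and all numerical settings are assumptions chosen for demonstration.

```python
# Illustrative sketch: log score vs. CRPS under one gross measurement
# error in the evaluation sample. All settings below are hypothetical.
import numpy as np
from scipy.stats import norm

def log_score(mu, sigma, y):
    # Negatively oriented log score: smaller is better.
    return -norm.logpdf(y, loc=mu, scale=sigma)

def crps_gaussian(mu, sigma, y):
    # Closed-form CRPS of a N(mu, sigma^2) forecast at observation y.
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1)
                    + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=200)   # "true" data: N(0, 1)
y_err = y.copy()
y_err[0] = 10.0                      # one gross measurement error

# Two competing forecasters: the true model and a wider alternative.
forecasts = {"true N(0,1)": (0.0, 1.0), "wide N(0,1.5)": (0.0, 1.5)}
for name, (mu, sigma) in forecasts.items():
    print(f"{name}: "
          f"log score {log_score(mu, sigma, y).mean():.3f} -> "
          f"{log_score(mu, sigma, y_err).mean():.3f}, "
          f"CRPS {crps_gaussian(mu, sigma, y).mean():.3f} -> "
          f"{crps_gaussian(mu, sigma, y_err).mean():.3f}")
```

The intuition the sketch conveys: the log score penalty grows quadratically in the outlier's standardized distance from the forecast mean, while the CRPS penalty grows only linearly, so a single contaminated observation shifts the average log score far more than the average CRPS.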