Background: Dense breast tissue may not only ‘mask’ small, non-calcified cancers but may also represent an independent risk factor for the development of breast cancer. Computer-automated breast density quantification (CABD) software tools have been developed to calculate volumetric breast density.

Objectives: This study sought: (1) to compare observer-based breast density scores, assigned using the fifth edition of the Breast Imaging Reporting and Data System (BI-RADS), with the breast density scores calculated by CABD software tools; (2) to determine inter-reader variability in breast density scoring between qualified radiologists, between radiologists in training (registrars) and between these two groups; and (3) to determine intra-reader reliability in breast density scoring.

Methods: A cross-sectional study was performed using the data of 100 patients (200 breasts). Three qualified radiologists and three registrars reviewed the mammograms in question and assigned a breast density score according to the fifth edition of the BI-RADS reporting system. Two readings took place a minimum of 30 days apart. The percentage agreement between the automated and observer-based scores was calculated, and intra-reader and inter-reader reliability values were determined.

Results: There was poor agreement between the breast densities calculated by CABD and the more subjective observer-based BI-RADS density scores. These results further reflect a statistically significant degree of inter-reader and intra-reader variability in the evaluation of breast density.

Conclusion: We conclude that automated breast density quantification (i.e. CABD) is a valuable tool for reducing variability in breast density ratings.
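The abstract reports percentage agreement and inter-/intra-reader reliability values without naming the statistic used; for categorical BI-RADS density scores (a–d), Cohen's kappa is a common choice because it corrects raw agreement for agreement expected by chance. A minimal sketch of both measures, using made-up illustrative ratings rather than the study's data:

```python
from collections import Counter

def percent_agreement(ratings_1, ratings_2):
    """Proportion of cases on which two readers assigned the same score."""
    matches = sum(a == b for a, b in zip(ratings_1, ratings_2))
    return matches / len(ratings_1)

def cohens_kappa(ratings_1, ratings_2):
    """Chance-corrected agreement between two readers on categorical scores.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected if each reader scored independently
    according to their own marginal category frequencies.
    """
    n = len(ratings_1)
    p_o = percent_agreement(ratings_1, ratings_2)
    freq_1, freq_2 = Counter(ratings_1), Counter(ratings_2)
    categories = set(freq_1) | set(freq_2)
    p_e = sum((freq_1[c] / n) * (freq_2[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical BI-RADS density scores (categories a-d) from two readers.
reader_1 = ["a", "b", "b", "c", "d", "a", "c", "b"]
reader_2 = ["a", "b", "c", "c", "d", "b", "c", "b"]

print(percent_agreement(reader_1, reader_2))  # raw agreement
print(cohens_kappa(reader_1, reader_2))       # chance-corrected agreement
```

Intra-reader reliability can be computed the same way by passing one reader's first- and second-round scores (taken at least 30 days apart, as in the Methods) as the two rating lists.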