Empirical Information on the Small Size Effect Bias Relative to the False Positive Rejection Error for Benford Test-Screening


  •  Yan Bao    
  •  Chuo-Hsuan Lee    
  •  Frank Heilig    
  •  Edward Lusk    

Abstract

Owing to the theoretical work of Hill, Benford digital profile testing is now a staple in screening data for forensic investigations and audit examinations. The prior empirical literature indicates that Benford testing, when applied to a large Benford-Conforming dataset, often produces a bias called the False Positive Error Screening Signal [FPESS] that misleads investigators into believing that the dataset is Non-Conforming. Interestingly, the same FPESS can also be observed when investigators partition large datasets into smaller datasets to address a variety of auditing questions. In this study, we fill the empirical gap in the literature by investigating the sensitivity of the FPESS to partitioned datasets. We randomly selected 16 balance-sheet datasets from the China Stock Market Financial Statements Database™ that tested as Benford Conforming, denoted RBCD. We then explored how partitioning these datasets affects the FPESS through repeated random sampling: first drawing 10% of each RBCD and then drawing 250 observations from each RBCD. This created two partitioned groups of 160 datasets each. The statistical profile observed was: for the RBCD, there were no indications of Non-Conformity; for the 10%-Sample, there were no overall indications that Extended Procedures would be warranted; and for the 250-Sample, there were a number of indications that the datasets were Non-Conforming. This demonstrates clearly that small datasets are indeed likely to create the FPESS. We offer a discussion of these results, with implications for audits in the Big-Data context where the audit In-charge would find it necessary to partition the client's datasets.
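The abstract does not specify the conformity statistic used; the following is a minimal Python sketch of the screening protocol it describes, assuming a chi-square goodness-of-fit test on first-digit frequencies at α = 0.05 (Nigrini-style MAD cutoffs are a common alternative). The helper names `first_digits`, `benford_chi2_p`, and `screening_profile` are hypothetical illustrations, not the authors' code.

```python
import numpy as np
from scipy.stats import chisquare

# Benford's expected first-digit proportions: P(d) = log10(1 + 1/d), d = 1..9
BENFORD = np.log10(1 + 1 / np.arange(1, 10))

def first_digits(values):
    """Leading digit of each strictly positive value."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    # Scale each value into [1, 10); the integer part is its first digit.
    return (v / 10 ** np.floor(np.log10(v))).astype(int)

def benford_chi2_p(values):
    """p-value of a chi-square goodness-of-fit test against Benford's law."""
    d = first_digits(values)
    observed = np.bincount(d, minlength=10)[1:]   # counts for digits 1..9
    expected = BENFORD * observed.sum()
    return chisquare(observed, expected).pvalue

rng = np.random.default_rng(0)

def screening_profile(dataset, n_reps=10, alpha=0.05):
    """Flag rate (the FPESS rate) for the full dataset, a 10% random
    sample, and a 250-observation sample; assumes len(dataset) >= 250."""
    result = {"full": benford_chi2_p(dataset) < alpha}
    for label, size in (("10%", len(dataset) // 10), ("n=250", 250)):
        hits = sum(
            benford_chi2_p(rng.choice(dataset, size=size, replace=False)) < alpha
            for _ in range(n_reps)
        )
        result[label] = hits / n_reps
    return result
```

Under this protocol, a markedly higher flag rate for the 250-observation samples than for the 10% samples drawn from the same Benford-Conforming data would mirror the small-sample FPESS pattern reported above.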



This work is licensed under a Creative Commons Attribution 4.0 License.