The ability to quickly identify defective software modules can expedite the development of dependable software. Much empirical research has focused on accurately predicting defective modules from data on software previously developed in similar environments. While such prediction is useful, when dealing with extremely large and complex systems the time wasted investigating the wrong modules can be critical. Is it possible to rank predicted modules in order of their susceptibility to defects? Unfortunately, the likelihood of defectiveness is neither entirely determined by, nor linearly related to, the number of defects in a module. This paper presents an algorithm that predicts whether a newly developed software module is likely to be defective and ranks those predicted to be defective in order of their likelihood. We apply the algorithm to five benchmark data sets from NASA software application projects. The experiments show results highly competitive with other well-established approaches, with an average accuracy of 85.3%.
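To make the "predict, then rank" idea concrete, the sketch below illustrates the general workflow the abstract describes, not the paper's own algorithm: train a classifier on historical module metrics, score newly developed modules with an estimated defect likelihood, and rank the predicted-defective ones from most to least likely. The synthetic data, the choice of a random-forest classifier, and the 0.5 decision threshold are all assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the paper's algorithm. Shows the generic
# predict-then-rank workflow with scikit-learn on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "historical" modules: 3 metrics per module plus a defect label
# (a toy rule stands in for real defect data).
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Newly developed modules to triage.
X_new = rng.normal(size=(10, 3))
proba = clf.predict_proba(X_new)[:, 1]          # estimated defect likelihood
predicted_defective = np.flatnonzero(proba >= 0.5)

# Rank predicted-defective modules from most to least likely,
# so the riskiest modules are investigated first.
ranking = predicted_defective[np.argsort(-proba[predicted_defective])]
```

In this sketch the ranking is derived directly from the classifier's probability estimates; the paper's point is precisely that such a likelihood need not track raw defect counts, so the scoring function matters.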