Helping students understand the quality of their programs is a difficult task, hampered by the limited time instructors have for grading. When the number of programs to grade is in the hundreds, instructors may be able to manage dynamic analysis of the programs and perhaps a cursory glance at the code itself. Automated solutions may appear attractive, but few exist in the literature, and too few examples exist to help instructors choose which metrics would help students visualize how they program. In this study, a collection of static metrics data obtained with Verilog Logiscope is correlated with an estimate of program quality to determine which metrics best reflect the instructor's notion of quality. The results are encouraging: definite correlations exist, showing that static analysis is a viable methodology for assessing student work. Further work is proposed to confirm the study's results and their practical application.
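The kind of correlation analysis the study describes can be sketched as follows. This is a hypothetical illustration, not the study's actual procedure or data: the metric chosen (cyclomatic complexity), the sample values, and the quality scores are all invented for demonstration, and the study may have used a different correlation measure.

```python
# Hypothetical sketch: correlating one static metric with
# instructor-assigned quality scores. All names and data values
# below are invented for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented sample: cyclomatic complexity per student program
# versus an instructor quality score on a 0-10 scale.
complexity = [3, 12, 7, 25, 5, 18]
quality    = [9,  5, 8,  2, 9,  4]

r = pearson(complexity, quality)
print(f"r = {r:.2f}")  # strong negative correlation on this invented sample
```

A metric whose correlation with the quality estimate is consistently strong (positive or negative) would be a candidate for showing students how their code compares to the instructor's idea of quality.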