Probability


When researchers are designing and testing software, they need to test it against a 'gold standard'.  The 'gold standard' is a reference that has been proven to be accurate.

For example, when testing an automated CTC system, researchers test it against the known results of a CTC study that a radiologist has read.

Correct with respect to (wrt) the gold standard = True Positive (TP)

(The system found a polyp and the radiologist agreed.)

Missed wrt the gold standard = False Negative (FN)

(The system did not detect a polyp but the radiologist did.)

Found but does not agree with the gold standard = False Positive (FP)

(The system detected a polyp but the radiologist found none.)

Not found, in agreement with the gold standard = True Negative (TN)

(Neither the system nor the radiologist found a polyp.)
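The four outcomes above can be tallied from paired readings. A minimal sketch, assuming each case is represented as a hypothetical pair of booleans (system's finding, radiologist's finding):

```python
def tally(cases):
    """Count TP, FN, FP, TN from (system_found, radiologist_found) pairs."""
    tp = sum(1 for sys, rad in cases if sys and rad)          # both found a polyp
    fn = sum(1 for sys, rad in cases if not sys and rad)      # system missed one
    fp = sum(1 for sys, rad in cases if sys and not rad)      # finding not confirmed
    tn = sum(1 for sys, rad in cases if not sys and not rad)  # both negative
    return tp, fn, fp, tn

# Hypothetical example: one case of each outcome.
counts = tally([(True, True), (False, True), (True, False), (False, False)])
```

These four counts are all that is needed for the rates defined below.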

Sensitivity of the design = Probability of a positive result given that the condition under consideration is true.

   -> # TP / (# TP + # FN)

False Negative Rate = Complement of sensitivity. Probability the feature is said not to be there given that it is.

 -> 1 - sensitivity = # FN / (# TP + # FN)
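The two formulas above can be sketched directly, assuming TP and FN counts are already available:

```python
def sensitivity(tp, fn):
    # P(positive result | condition true) = TP / (TP + FN)
    return tp / (tp + fn)

def false_negative_rate(tp, fn):
    # Complement of sensitivity: FN / (TP + FN)
    return 1 - sensitivity(tp, fn)
```

For example, a system with 8 true positives and 2 false negatives has a sensitivity of 0.8 and a false negative rate of 0.2.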

False Positive Rate = Probability the feature is said to be there given that it is not.

 -> # FP / (# FP + # TN)

Specificity (True Negative Rate) = Probability of a negative result, given that the condition under consideration is false.  Complement of the False Positive Rate.

-> # TN / ( # FP + # TN )
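Specificity and the false positive rate mirror the pair above, this time over the condition-false cases. A sketch, assuming TN and FP counts:

```python
def specificity(tn, fp):
    # P(negative result | condition false) = TN / (FP + TN)
    return tn / (fp + tn)

def false_positive_rate(tn, fp):
    # Complement of specificity: FP / (FP + TN)
    return 1 - specificity(tn, fp)
```

For example, 9 true negatives and 1 false positive give a specificity of 0.9 and a false positive rate of 0.1.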

Predictive Value Positive (PVP) = Probability the feature is there given that it is said to be there.

-> # correctly marked / Total # marked

-> # TP / (# TP + # FP)
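PVP ("correctly marked over total marked") can be sketched the same way, assuming TP and FP counts:

```python
def predictive_value_positive(tp, fp):
    # P(feature is there | it is said to be there) = TP / (TP + FP)
    # i.e. correctly marked / total marked
    return tp / (tp + fp)
```

For example, if the system marked 10 polyps and the radiologist confirmed 8 of them, the PVP is 0.8.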

Probability is important to researchers when they are testing their software because it lets them quantify how accurate their system is.