
# Spike's Deadliest Warrior: Are the winners true victors?

By Dominique Lord (Page 2 of 2 pages)

As you know, if we could already be certain about the outcome of a given system, we wouldn't have to conduct a simulation study in the first place. Thus, simulation is used to give us an idea of how a system (in this case, the warriors) would behave when we don't know all the characteristics of the input variables that influence the simulation outcome. The simulation output always provides a mean value and the variance associated with that estimate. The variance and the standard deviation (the square root of the variance) give a measure of the uncertainty.

Even with the 100 X-factors, the simulation output should provide the variance associated with the estimate. In truth, these input factors should be random variables (with a mean and variance) in order to properly simulate the outcome of the battles (using a Monte Carlo simulation protocol*). According to what I've seen on the show, these X-factors are subjective values provided by the hosts and other experts. This means that the simulation output could be biased (as suggested by the fact that American fighters have only lost the simulated battles once in three seasons).
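To make this concrete, here is a minimal sketch of what a Monte Carlo battle simulation with random, rather than fixed, input factors might look like. The function name, the ratings, and the normal distributions are all illustrative assumptions on my part, not the show's actual model:

```python
import random

def simulate_battles(mean_a, mean_b, sd=10.0, n_battles=5000, seed=42):
    """Return the fraction of battles won by warrior A.

    Each side's effectiveness is drawn fresh for every battle from a
    normal distribution (mean = an expert rating, sd = the assumed
    uncertainty in that rating), instead of being a fixed score.
    """
    rng = random.Random(seed)
    wins_a = 0
    for _ in range(n_battles):
        score_a = rng.gauss(mean_a, sd)
        score_b = rng.gauss(mean_b, sd)
        if score_a > score_b:
            wins_a += 1
    return wins_a / n_battles
```

With evenly matched ratings (say, 70 vs. 70), the win fraction hovers around 50%; widen the gap between the means, or shrink the standard deviation, and one warrior pulls ahead.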

Given the lack of information about the uncertainty or variability associated with the simulation process, we can still use an approximation method (described in the next paragraph) to examine whether the differences between the percentages are statistically significant. In other words, we'll be able to determine with great confidence whether or not the Zombies kick the Vampires to the curb. I guess we'll see whether what I foresee actually happens this coming September.

In statistics, we can use the t-statistic to test whether two percentages are statistically different. A t-statistic above 1.96, combined with a p-value below 0.05 (or 5%), shows that the two percentages are significantly different (at an acceptable level of uncertainty). Note that we can use different threshold values for the t-statistic to measure different levels of uncertainty, but I'll leave that for another day.
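For readers who want to run the numbers themselves, the comparison can be sketched as a two-proportion test in a few lines of Python. With thousands of simulated battles, the t-statistic is effectively a z-statistic from the standard normal, so I use the normal approximation below; treating the two win percentages as independent samples is itself part of the approximation mentioned above:

```python
import math

def two_proportion_test(p1, p2, n1, n2):
    """z-statistic and two-sided p-value for comparing two win fractions.

    p1, p2 are fractions between 0 and 1; n1, n2 are the numbers of
    simulated battles behind each. Uses the pooled-proportion
    standard error and the standard normal distribution.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value: 2 * P(Z > |z|), via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, a 55%-to-50% split over 5,000 battles per side clears the 1.96 bar comfortably, while a 51%-to-50% split does not.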

So, what does this tell us about the Season 3 episodes so far?

Accounting for the uncertainty (probably under-estimated for the test I'm using), we get the following results when we compare the percentages:

George Washington vs. Napoleon Bonaparte
t-statistic=1.20, p-value=0.23

Joan of Arc vs. William the Conqueror
t-statistic=3.48, p-value=0.0005

U.S. Army Rangers vs. North Korean Special Operation Force
t-statistic=0.16, p-value=0.88

Thus, based on the results above, only Joan of Arc appears to be a clear winner. For the other two, no one can claim to be Deadliest. Oops.

Maybe the combatants are too evenly matched. On the other hand, if the results are too different, the viewers might complain that the outcome is so obvious that there is no need to have them 'fight to the death'.

Uncertainty, after all, makes things more interesting. Especially if you're into stats.

*An example of the carnage often seen on The Deadliest Warrior*

*For a true estimate of the variability, the 5,000 simulation runs should be performed, say, 100 or 1,000 times with different input values. Then, we summarize all the percentages in order to get the mean (average) and the variance.
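The replication scheme in the footnote can be sketched as follows; again, the function name, ratings, and distributions are illustrative assumptions rather than the show's actual procedure:

```python
import random
import statistics

def replicated_simulation(mean_a, mean_b, sd=10.0,
                          n_battles=5000, n_replications=100, seed=1):
    """Repeat the whole n_battles simulation n_replications times with
    freshly drawn inputs, then summarize warrior A's win percentages.

    Returns (mean win percentage, variance of the win percentage).
    """
    rng = random.Random(seed)
    percentages = []
    for _ in range(n_replications):
        wins = sum(
            rng.gauss(mean_a, sd) > rng.gauss(mean_b, sd)
            for _ in range(n_battles)
        )
        percentages.append(100.0 * wins / n_battles)
    return statistics.mean(percentages), statistics.variance(percentages)
```

The variance across replications is exactly the piece of information the show never reports, and it's what the significance test above needs.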

Thanks to Taste is Sweet for her input.

This article was cross-posted at Open Salon.


Dominique is an Associate Professor in the Zachry Department of Civil Engineering at Texas A&M University. His research work aims at reducing the negative effects associated with motor vehicle crashes.

The views expressed herein are the sole responsibility of the author and do not necessarily reflect those of this website or its editors.
