Rethinking Performance Methodology in ‘Objective’ Sport

Introduction to the MeenaMethod

When it comes to swimming, or any Metric Sport for that matter, the point scoring methodology should be derived directly from performance, not indirectly from placement. And a benchmark, for example an NCAA or world record, should establish the scale against which performances are measured.

The reason is that the performances of Metric Sports are measured objectively (i.e. in meters, seconds, or kilograms, per the International System of Units (SI)), and thus are unbiased.  In other words, the results are universally regarded as accurate.

Therefore, to maintain the same level of unbiased universal accuracy in scoring, the points should be directly correlated to the metric result, and should not give uncorrelated favor to any performance over another. It is this level of accuracy and objectivity that the MeenaMethod attempts to preserve.

The MeenaMethod is a framework for relatively scoring the performances of Metric Sports(1).  The premise is that since Metric Sports are measured with independent variables, relative performance points(2) can, under certain defined conditions, be objectively calculated.

As you are about to read, when measured against the requirements of the MeenaMethod, the existing performance point scoring methodologies for competitive swimming are not objective and, ultimately, not fair.

Let’s dive in…


THE PROBLEM

Note: unless otherwise stated, this article will focus on swimming as the Metric Sport example, and the scoring methodologies of the NCAA and FINA. Additionally, all times or performances referenced are as of the date of this publication.

The problem is that Metric Sports are measured objectively, yet the existing point scoring options for them utilize subjective methods for scoring.

At the moment, there is not a point scoring methodology ascribed to a Metric Sport that utilizes benchmarks to set a relative scale of evenly distributed points. Instead, all point scoring methodologies allow objective performances to dictate placement, but then subjectively ascribe point values that are deemed “fair” even though they are neither evenly distributed nor directly correlated to the performance.

Said differently, a truly fair scoring methodology does not exist, because the current methodologies used by the NCAA and FINA are subjective. And while I understand the subjectivity might be intended to give a “boost” to the winner, a fair framework, and any subsequent adjustments for rewarding placement, should be objective throughout.


GOVERNING BODY EXAMPLES


NCAA Swimming

For NCAA swimming, there are quite a few scoring options depending on the meet (e.g. dual, tri, quad, invitational) and lanes (e.g. <5, 6+, <8, 9+), and you can read more about them here if you want.  In summary, though, the main issue with the NCAA swimming methodology is that points are ascribed to placement alone, with no direct tie to the performance itself (see Exhibit 1).


FINA

If you would like to read more about the FINA Performance Point calculation, you can do so here.  However, in summary, FINA points are calculated as:

Points = 1000 * (Benchmark / Time Tested) ^3
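The formula above translates directly into code. A minimal sketch follows; the truncation to a whole number of points is my reading of FINA's published point tables:

```python
def fina_points(benchmark: float, time: float) -> int:
    """Cubic FINA scale: the benchmark time earns exactly 1000 points,
    slower swims earn fewer, and only beating the benchmark earns more."""
    return int(1000 * (benchmark / time) ** 3)  # truncated to whole points
```

For example, fina_points(46.91, 46.91) returns 1000, while any slower time returns fewer points.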

So, while FINA does use a benchmark (e.g. the world record)(4), and the only way to break through the scale (i.e. score over 1000 points) is to break and reset the benchmark, there are still issues with the methodology that make it unfair and/or inaccurate.  They are illustrated in the exhibits below: a static scale, a non-linear slope, and too few significant digits.


EXHIBITS


Exhibit 1 (Part 1)
NCAA Swimming
vs. MeenaMethod Point Comparison

Scenario: Comparing the Men’s 2018 NCAA Swimming D1 Championship team results using the NCAA Championship scoring methodology vs. the MeenaMethod scoring methodology.

  • NCAA Scoring: according to:

    • Rule 7, Section 8, Article 4: “when 16 competitors qualify for the finals of a championships meet, the scoring of place values shall be: relays, 40-34-32-30-28-26-24-22-18-14-12-10-8-6-4-2; individual events, 20-17-16-15-14-13-12-11-9-7-6-5-4-3-2-1”

    • Rule 7, Section 10. Ties: “in the case of ties within an event, the points involved shall be equally divided among the tied competitors”

  • NCAA Observations:

    • Methodology is based on placement only

    • Assumes, no matter the outcome, 1st place deserves 18% more points than 2nd place, and 15th place deserves 100% more points than 16th place

    • Relay points are indirectly correlated with the individual events by being worth double the points but requiring quadruple the participants

  • Result: this scoring methodology guided Texas to their fourth consecutive title in 2018 with 449.0 points, and California, Berkeley placed second for the fourth consecutive year with 437.5 points
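The place values and tie rule quoted above are mechanical enough to express in a few lines; a sketch, with the tables taken verbatim from Rule 7:

```python
# Place values for a 16-finalist championship meet (Rule 7, Section 8, Article 4)
INDIVIDUAL = [20, 17, 16, 15, 14, 13, 12, 11, 9, 7, 6, 5, 4, 3, 2, 1]
RELAY = [40, 34, 32, 30, 28, 26, 24, 22, 18, 14, 12, 10, 8, 6, 4, 2]

def tie_points(table, tied_places):
    """Rule 7, Section 10: divide the points for the tied places equally."""
    pool = sum(table[place - 1] for place in tied_places)
    return pool / len(tied_places)
```

A two-way tie for 1st in an individual event, for instance, pays (20 + 17) / 2 = 18.5 points to each swimmer.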


Exhibit 1 (Part 2)
NCAA Swimming vs. MeenaMethod Point Comparison

Scenario: Comparing the Men’s 2018 NCAA Swimming D1 Championship team results using the NCAA Championship scoring methodology vs. the MeenaMethod scoring methodology.

  • MeenaMethod Observations

    • Points are awarded relative to an agreed upon benchmark – in this case, the NCAA D1 record in each event

    • All performance points are tied directly to the benchmark, so the integrity of placement is still maintained, and the fastest person/team still wins

    • Rather than ascribing an arbitrary point value to each performance, performances are scored based on their placement on the NCAA Record scale, with individual, relay, and diving events all worth the same number of points(5)

  • Result: without changing the placement of a single event, the MeenaMethod scoring methodology awards the most points to California, Berkeley, with 3,092 compared with Texas’s 2,982, because, when compared to the best of the best in the NCAA, California, Berkeley was relatively the fastest team in the water in 2018


Exhibit 2
Scale: Static vs Dynamic

Scenario: Suppose the Male LCM 100 Freestyle world record of 46.91 seconds were tied in January and then broken each month by 0.10% (roughly 0.04 to 0.05 seconds), resetting 11 times in total.  Then a:

  • Static Scale (FINA Methodology) benchmarks each of the 11 performances to the 46.91 record, since that was the benchmark on January 1.

    • Result = even though each performance improves on the previous record by the same relative margin, the performance in December will be worth the most points

  • Dynamic Scale (MeenaMethod) benchmarks each performance to the previous world record, so in this case it would reset 11 different times in 2019

    • Result = since the record was broken by 0.10% each time, the point values remain the same for every performance
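The scenario above is easy to simulate.  The sketch below uses FINA's cubic formula for the point values, so the exact numbers are illustrative; the record time and the 0.10% step come from the scenario:

```python
def cubic_points(benchmark: float, time: float) -> float:
    return 1000 * (benchmark / time) ** 3  # FINA-style cubic scale, unrounded

record = 46.91
swims = [record]
for _ in range(11):                       # the record falls 11 times, 0.10% each
    swims.append(swims[-1] * (1 - 0.001))

# Static scale: every swim is judged against the January benchmark,
# so each successive record is worth more points than the last.
static = [cubic_points(record, t) for t in swims[1:]]

# Dynamic scale: every swim is judged against the record it broke,
# so equal relative improvements earn equal points.
dynamic = [cubic_points(swims[i], swims[i + 1]) for i in range(11)]
```

Here `static` grows month over month, while every entry of `dynamic` is the same value (about 1003 points per record break).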


Exhibit 3
Slope: Non-Linear vs Linear

Scenario: Using the Male LCM 100 Freestyle world record of 46.91 seconds as the benchmark, a +/- 1.00% variance (~0.47 seconds) should produce the same variation in points no matter where it falls on the scale:

  • FINA’s cubic curve approach produces a non-linear slope that does not treat all hundredths equally, and in terms of points it favors performances as they get closer to the benchmark

    • Result = as performances get faster (i.e. better), each hundredth closer towards the benchmark gains more in points than the last hundredth – in this case, as the times get faster the slope increases to 4.0% and decreases to 3.7% as the times get slower

  • The MeenaMethod’s scale of equal distribution approach produces a linear slope that awards each performance fairly and evenly

    • Result = a performance that is 5.00% slower than the benchmark is, in fact, 5x the delta of a performance that is 1.00% slower – or in this case, 50 fewer points, compared with 10 – with a constant slope throughout
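The article deliberately withholds the MeenaMethod math (see footnote 1), so the following is only a hypothetical linear scale that reproduces the deltas in this exhibit, not the official formula:

```python
def linear_points(benchmark: float, time: float, scale: float = 1000.0) -> float:
    """Hypothetical linear scale (NOT the published MeenaMethod formula):
    the benchmark earns `scale` points, and every 1.00% of time costs the
    same fixed number of points, wherever the swim falls on the scale."""
    return scale * (2 - time / benchmark)

b = 46.91  # Male LCM 100 Freestyle world record, per the scenario above
```

On this scale a swim 1.00% slower than the benchmark loses 10 points and a swim 5.00% slower loses 50, exactly 5x the delta, whereas a cubic curve loses a slightly different number of points per hundredth depending on where the swim sits.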


Exhibit 4
Significant Digits

Scenario: Using the Male LCM 100 Freestyle world record of 46.91 seconds as the benchmark, calculating the points for the seven performances from 47.02 through 47.08 seconds should produce seven different values:

  • FINA accepts a minimum of 3 significant digits in its methodology, the result of a 1000-point scale without a decimal structure, but that is not enough to separate performances measured with a minimum of 4 significant digits

    • Result = only four separate point values are ascribed to the seven performances, so while some are faster, they do not achieve more points

  • The MeenaMethod utilizes a 100-point scale with a decimal structure(6) so the minimum of significant digits is 4, allowing for a more precise score

    • Result = awards a different point value to each of the seven performances
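The rounding collapse in this exhibit can be reproduced directly, assuming (as above) that FINA truncates to whole points:

```python
benchmark = 46.91
times = [round(47.02 + 0.01 * i, 2) for i in range(7)]  # 47.02 .. 47.08

# FINA-style: 1000-point cubic scale truncated to whole points.
# Seven distinct swims collapse onto only four distinct scores,
# so some faster swims earn no more points than slower ones.
fina = [int(1000 * (benchmark / t) ** 3) for t in times]
```

With a decimal place retained, as on the MeenaMethod's 100.00-point scale, the seven swims would map to seven distinct scores.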


CONCLUSION


Closing Thoughts: A Note from the Author

Hi.  My name is Elliot Meena and I am the “creative director” of, you guessed it, the MeenaMethod.

First off, thank you.  If you are reading this (I think) it means you have read the article in its entirety, and I appreciate that.  I hope some of what I am trying to present makes sense.

Additionally, I invite an open dialogue.  Critics, skeptics, Metric Sport nerds alike, please come back with questions and comments at info@meenamethod.com

And lastly, in closing, I will leave you with some thoughts in the form of the footnotes…

Footnotes:

  1. I purposely did not include any MeenaMethod math in this article because I wanted to keep it conceptual – my hope is that by looking at the data you can see, relatively, the current methodologies don’t appear fair

    • This article was just an introduction and an attempt to highlight the subjectivity of scoring methodologies in activities that are objectively measured – the framework is also not limited to performance points, as relative multiples can be derived when other variables are introduced

  2. For the purposes of this article, all performance points referenced are unadjusted – which means that nothing favorable is applied to the equation that would give performances more points than deserved, such as a first-place boost

    • adjustments (to be explained later) are a way to add gamification to the MeenaMethod framework while still maintaining objectivity

  3. There really isn’t anything special about 3.  I just inserted this footnote so nobody was confused by the cubic exponent 3 in the FINA points calculation

  4. There are two statements I know I will hear in regards to record breaking.  So let me respond to both preemptively:

    • The MeenaMethod agrees with the statement “not all records are created equal”

      • A faster record-breaking performance can earn fewer points than a previous one under the MeenaMethod

      • For example, Caeleb Dressel earned 100.25 points for his 39.90 SCY 100 Freestyle NCAA-record-breaking performance at the 2018 D1 NCAA Championships, but he earned 101.14 points for his 40.00 SCY 100 Freestyle NCAA-record-breaking performance at the 2017 D1 NCAA Championships

    • The MeenaMethod disagrees with the statement “not all records are considered equal”

      • All benchmarks (measured under the same conditions) must be considered relatively equal to each other (equally fast, far, or heavy) – just as one must assume the capital markets operate efficiently, the MeenaMethod does not allow for arbitrage

      • If you want to know the true impact of a record, you should look at the points achieved when the record was set

        • For example, Caeleb Dressel’s SCY 100 Butterfly NCAA Record may only be worth 100.00 points now, but it achieved 101.79 points when it was set

  5. Diving, while not a Metric Sport, is still included in the NCAA analysis because:

    • It actually doesn’t change the outcome (California would still beat Texas), and

    • Anything can technically be scored via the MeenaMethod if there is an associated benchmark, and diving maintains NCAA point records for the 1-, 3-, and 10-Meter competitions

  6. A scale of 100.00 points is relatable to participants as a familiar grading scale

    • For example, if you want to place Top 16 at NCAA D1 Championships, using the Men’s 2018 data, you need to score, on average across the 13 individual events, 93.93 points in your event – or said differently, you better be a solid A swimmer


Author: Elliot Meena

Published: November 4, 2018

Source: FINA.org, NCAA.org