On May 7, 2011, TestMy.net published the article Why Do My Results Differ From Speedtest.net / Ookla Speed Tests?, which referenced wiki.ookla.com/test_flow (the page is gone nowadays, but it is still available through the Wayback Machine: wiki.ookla.com/test_flow).
On that page, Ookla stated:
- Small binary files are downloaded from the web server to the client to estimate the connection speed
- Based on this result, one of several file sizes is selected to use for the real download test
- The test is performed with cache prevention via random strings appended to each download
- Up to 8 parallel HTTP threads (configurable) can be used for the test
- Throughput samples are received at up to 30 times per second
- These samples are then aggregated into 20 slices (each being 5% of the samples)
- The fastest 10% and slowest 30% of the slices are then discarded (see * below for more detail)
- The remaining slices are averaged together to determine the final result
* Since we are measuring data transported over HTTP via Flash, there is potential protocol overhead, buffering due to the many layers between our application and the raw data transfer, and throughput bursting due primarily to CPU usage. This accounts largely for dropping the top 10% and bottom 10% of the samples. We also keep our default test length short for the user experience, and compared to this duration the ramp-up period is fairly significant, driving us to eliminate another 20% of the bottom result samples.
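The slicing and trimming described above is easy to sketch. The snippet below is only an illustration, not Ookla's actual code: the function name, the assumption that samples are throughput readings in Mbit/s, and the way leftover samples are chunked are mine; only the 20-slice split and the 10%/30% trimming come from the wiki text.

```python
def ookla_style_aggregate(samples):
    """Aggregate raw throughput samples as the 2011 wiki describes:
    20 slices of 5% each, drop the fastest 10% (2 slices) and the
    slowest 30% (6 slices), then average what remains."""
    if not samples:
        raise ValueError("no throughput samples collected")

    slice_count = 20
    n = len(samples)

    # Average each consecutive ~5% chunk of samples into one slice value.
    slices = []
    for k in range(slice_count):
        chunk = samples[k * n // slice_count:(k + 1) * n // slice_count]
        if chunk:
            slices.append(sum(chunk) / len(chunk))

    # Sort the slices, then discard the slowest 30% and the fastest 10%.
    slices.sort()
    kept = slices[int(0.3 * len(slices)):int(0.9 * len(slices))] or slices

    # The remaining slices are averaged for the final result.
    return sum(kept) / len(kept)


# Example: two seconds of sampling at ~30 samples per second gives ~60 readings.
# print(ookla_style_aggregate([12.1, 55.4, 93.8, 94.2, 95.0] * 12))
```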
In other words, by discarding the fastest 10% of the slices, Ookla used to measure (100 - 10 =) 90% of your internet speed.
Nowadays, Ookla still measures 90% of your internet speed, and the same holds for other mainstream speed tests such as Cloudflare's. Cloudflare writes on its about page: "(...) Speed is measured by downloading/uploading progressively larger files and taking the 90th percentile speed (...)".
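For comparison, taking a 90th percentile can be sketched as below. The linear-interpolation percentile definition and the sample values are assumptions for illustration, not details Cloudflare publishes.

```python
def percentile(values, p):
    """Return the p-th percentile (0-100) of values, using the common
    linear-interpolation definition."""
    if not values:
        raise ValueError("no measurements")
    ordered = sorted(values)
    rank = (p / 100) * (len(ordered) - 1)
    lower = int(rank)
    upper = min(lower + 1, len(ordered) - 1)
    return ordered[lower] + (ordered[upper] - ordered[lower]) * (rank - lower)


# Hypothetical per-transfer throughputs (Mbit/s) from progressively larger files.
measurements = [41.0, 78.3, 88.9, 92.4, 93.0, 93.6, 94.1, 94.3, 94.5, 94.8]
print(percentile(measurements, 90))  # reports a value near the top of the range
```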