I haven't yet, but here's how I plan to do it:
const modByRange = [
  {r: 1, m: 0},
  {r: 2, m: 0},
  {r: 3, m: -1},
  {r: 5, m: -2},
  {r: 7, m: -3},
  {r: 10, m: -4},
  {r: 15, m: -5},
  {r: 20, m: -6},
  {r: 30, m: -7},
  {r: 50, m: -8},
  {r: 70, m: -9},
  {r: 100, m: -10},
  {r: 150, m: -11},
  {r: 200, m: -12},
  {r: 300, m: -13},
  {r: 500, m: -14},
  {r: 700, m: -15},
  {r: 1000, m: -16},
  {r: 1500, m: -17},
  {r: 2000, m: -18},
  {r: 3000, m: -19},
  {r: 5000, m: -20},
  {r: 7000, m: -21},
  {r: 10000, m: -22},
  {r: 15000, m: -23},
  {r: 20000, m: -24}
].reverse(); // largest range first, so find() hits the widest range that still fits

const getModForDistance = (d) => {
  // First entry (scanning from the largest range down) whose range is <= the distance.
  const entry = modByRange.find((o) => o.r <= d);
  // No match means the distance is below the smallest range; fall back to 0.
  return (entry && entry.m) || 0;
};
And here are the unit tests I wrote with Jest to test the function:
describe('modByDistance', () => {
  it('should return 0 for 0', () => {
    expect(getModForDistance(0)).toBe(0);
  });
  it('should return 0 for 1', () => {
    expect(getModForDistance(1)).toBe(0);
  });
  it('should return 0 for 2', () => {
    expect(getModForDistance(2)).toBe(0);
  });
  it('should return -3 for 9', () => {
    expect(getModForDistance(9)).toBe(-3);
  });
  it('should return -4 for 10', () => {
    expect(getModForDistance(10)).toBe(-4);
  });
  it('should return -4 for 11', () => {
    expect(getModForDistance(11)).toBe(-4);
  });
  it('should return -7 for 42', () => {
    expect(getModForDistance(42)).toBe(-7);
  });
  it('should return -24 for 500000', () => {
    expect(getModForDistance(500000)).toBe(-24);
  });
});
PASS Measure/__tests__/modByDistance.test.js
modByDistance
✓ should return 0 for 0 (1ms)
✓ should return 0 for 1
✓ should return 0 for 2
✓ should return -3 for 9
✓ should return -4 for 10
✓ should return -4 for 11 (1ms)
✓ should return -7 for 42
✓ should return -24 for 500000 (1ms)
Test Suites: 1 passed, 1 total
Tests: 7 passed, 7 total
Snapshots: 0 total
Time: 0.208s, estimated 1s
Ran all test suites matching /Measure/i.
Just need to plug it in.
Speaking to efficiency, this is probably "good enough"; it's O(n). Note the .reverse() at the bottom of the modByRange definition. Often in computer science, picking the right representation can really simplify the algorithm. By reversing, I can reduce this to a simple find, which evaluates a predicate on each element of an array and returns the first element for which the predicate is true. My function just finds the first entry whose range is less than or equal to the distance. Since the ranges are in descending order, it compares against 20000 first, then 15000, and so on. If I iterated from smallest to largest instead, I'd have to find the entry just before the first one whose range is greater than the distance, and account for the awkward case where the distance is beyond the last range.
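For comparison, here's a rough sketch of what that smallest-to-largest version might look like. The names ascendingModByRange and getModForDistanceAscending are made up just to illustrate the extra edge cases; this isn't code I'm planning to ship:
// Hypothetical sketch: the same lookup against the table in ascending order,
// i.e. as if the final .reverse() had been left off.
const ascendingModByRange = [...modByRange].reverse(); // back to smallest-first

const getModForDistanceAscending = (d) => {
  // Find the first entry whose range is greater than the distance...
  const index = ascendingModByRange.findIndex((o) => o.r > d);
  if (index === -1) {
    // ...the distance is beyond the largest range, so use the last entry.
    return ascendingModByRange[ascendingModByRange.length - 1].m;
  }
  if (index === 0) {
    // The distance is below the smallest range.
    return 0;
  }
  // The entry we actually want is the one just before it.
  return ascendingModByRange[index - 1].m;
};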
For small data sets this is "good enough", because you'd waste more in overhead on any more complicated solution than you'd gain back. If your dataset is really big, say measured in thousands of rows of similar data, you might look into binary search, whose worst-case performance is O(log n). Basically, you look at the entry in the middle: if it's the one you want, return it; if it's too big, it becomes your new end point; if it's too small, it becomes your new start point; then repeat.
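Here's a minimal sketch of that against the same table, assuming it's kept in ascending (smallest-first) order; ranges and getModBinary are just illustrative names, not part of the code above:
// Rough sketch: binary search for the entry with the largest range that is
// still <= the distance.
const ranges = [...modByRange].reverse(); // same table, back in ascending order

const getModBinary = (d) => {
  let lo = 0;
  let hi = ranges.length - 1;
  let mod = 0; // fallback when the distance is below the smallest range
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (ranges[mid].r <= d) {
      mod = ranges[mid].m; // candidate; keep looking for a bigger range that still fits
      lo = mid + 1;
    } else {
      hi = mid - 1; // range is too big; search the lower half
    }
  }
  return mod;
};
For a 26-entry table that's at most 5 comparisons instead of up to 26, but at this size you'd never notice the difference.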