Suggested

27 Jan 2018, v0.0.3 (Beta) (CHS)

Description

There seems to be a lot of wasted calculation when reducing the grid size. If I run a job with only a small range of allowed troop numbers, the grid calculations (x64x, x32x, x16x, etc.) all seem to repeat the same calculations for the high and low ends of the troop range at each resolution. Finally, as the resolution drops below the given range, there are additional calculations for the middle of the allowed range.
I am guessing this is how it works from looking at the “progress” column. There is enormous potential for speed-up by saving calculations at grid points that will be repeated at the next smaller resolution.
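If the grid step really is halved like this, every point from a coarser pass recurs in the finer one, which is where the saving would come from. A minimal sketch (the grid scheme is my assumption from the x64x/x32x/x16x labels, and `grid_points` is a hypothetical helper, not Centurion's code):

```python
# Assumption: each pass samples the troop range at a fixed step, and the
# step is halved between passes, so a coarse pass's points are a subset
# of the next finer pass's points.
def grid_points(lo, hi, step):
    """Troop counts sampled at a given resolution, endpoints included."""
    pts = list(range(lo, hi + 1, step))
    if pts[-1] != hi:
        pts.append(hi)  # always evaluate the upper bound of the range
    return pts

coarse = set(grid_points(0, 192, 64))  # the "x64x" pass
fine = set(grid_points(0, 192, 32))    # the "x32x" pass
print(sorted(coarse & fine))  # -> [0, 64, 128, 192]: every coarse point recurs
```

Under this assumption, caching the results of one pass saves exactly those repeated evaluations in the next.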

Solution

Your observation is absolutely right. A pre-v0.0.1 version of Centurion used to memorise previously calculated attacks within a job to prevent re-calculation, but that ran into memory trouble. Centurion already uses a great deal of memory for optimisations of two or more waves. That’s why I removed the memorisation at the cost of calculation time.
But it is worth thinking about a smarter way of doing this; the initial approach was quite naive and can probably be improved. I’ll see if I can get something feasible running for v0.1.

Scheduled

v0.0.4 (Beta): Caching of results from previous grids, slight acceleration. |

There is one change that could be implemented very quickly: pick the largest range in troop numbers and let that determine the size of the first grid.

Sometimes I run with a small number of iterations and a large range of troops; having identified a likely region for the solution, I then run again with more iterations and a range of about ±8 troops around the optimum found. The starting grid for the second job should then be x16x. It’s a small saving, but it could be implemented quickly.
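That quick change could look something like this (a sketch only; `initial_step`, the power-of-two steps and the maximum step of 64 are my assumptions, not Centurion's actual code):

```python
# Assumption: grid steps are powers of two (x64x, x32x, x16x, ...).
# Instead of always starting at the coarsest step, start at the first
# step that is no coarser than the widest allowed troop range.
def initial_step(ranges, max_step=64):
    """ranges: list of (lo, hi) allowed counts per troop type."""
    widest = max(hi - lo for lo, hi in ranges)
    step = max_step
    while step > 1 and step > widest:
        step //= 2  # no point sampling coarser than the range itself
    return step

# ±8 troops around a previous optimum -> range width 16 -> start at x16x
print(initial_step([(92, 108)]))  # -> 16
```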

Memory troubles can be addressed with some form of hashing: keep results in a hash table, which can be as big as you need, and recalculate whenever a result is not found in the table. There is scope here for extensible and dynamic hash schemes that grow to fit the available memory for any given user, and also for a very large dynamic table kept in the file system, saved between jobs and paged in and out for a given job.
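In Python, for example, the bounded hash table with recalculate-on-miss comes almost for free (a sketch, not Centurion's code; the simulation body here is a dummy stand-in):

```python
import functools

calls = 0  # counts how often the "expensive" simulation actually runs

@functools.lru_cache(maxsize=100_000)  # bound the table to fit available RAM
def simulate(setup):
    """setup: a hashable tuple of troop counts. Dummy stand-in for the sim."""
    global calls
    calls += 1
    return sum(setup)  # placeholder result

simulate((100, 50, 0))
simulate((100, 50, 0))  # cache hit: the simulation does not run again
print(calls)  # -> 1
```

A bounded cache never exhausts memory: once full, old entries are evicted and simply recalculated on demand, exactly as proposed above.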

My background (now decades old, but I can remember some of it) is data structures and algorithms. I can probably manage some of that still… but I’d be hopeless with the interface stuff. I used to write my own data structure routines, but there are often good libraries with sophisticated hash table management you could use.


Another quick fix: for VictoryXpEff, the optimum attack can have a general carrying the maximum number of troops. You may be able to get a saving by removing one of the troop types from the search grid and quietly filling the general up with that troop before simulation. I suspect this would roughly halve computation times.
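The fill-up trick amounts to dropping one dimension from the search. A sketch with made-up numbers (`setups`, the capacity of 200 and the coarse grid are illustrative assumptions, not Centurion's code):

```python
import itertools

CAPACITY = 200  # assumed garrison size

def setups(grid_per_type, n_searched_types):
    """Enumerate only n-1 troop counts; the last type fills the general."""
    for counts in itertools.product(grid_per_type, repeat=n_searched_types):
        used = sum(counts)
        if used <= CAPACITY:
            yield counts + (CAPACITY - used,)  # quietly top up the last type

# Two searched types on a coarse grid instead of three: one fewer dimension.
print(sum(1 for _ in setups(range(0, 201, 64), 2)))  # -> 10
```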


Maximum number of troops: you are right, not having to vary the figure for the last troop type will bring calculation time down by a certain amount. But there are also some negative side effects:

* First waves should probably not be filled up; this would increase losses unnecessarily.

* When, let’s say, only 100 BSS are needed but Centurion fills up to 200 because that’s the capacity, the player might think he/she really needs 200 BSS. If 200 BSS aren’t available, he/she might discard the solution or start producing the extra BSS even though they aren’t really needed. I consider a solution with minimal troops superior to one that attacks with more than necessary.

* For which objectives does this apply? You mention VictoryXpEff, and you’re right there. But there are others where it applies too, and some where it would do real harm (Lock).

As a solution I am picturing one checkbox (toggle) per wave for whether or not to fill that wave’s garrison up to its capacity. That way the user can decide for each general individually. I am not really sure whether this checkbox should be checked or unchecked by default, since it might depend on the circumstances. Does the checkbox state correlate somehow with the objective and the number of the wave? I want to be careful here, because the user interface can quickly get overloaded and overcomplicated by things like that…

🙂

Anyway, nice thinking… we’re still in the process of it, no pressure. I won’t rush in something that we’d need to roll back later :)


I’d like to add my proposal here for public discussion; maybe someone has further improvements?

My idea is to significantly reduce the number of unnecessary iterations by calculating the “worst case” and “best case” beforehand.

The 1st calculation for each fight combination would be one where all our troops hit with the smaller value while the enemy always hits with the higher one. Now we know the highest possible losses for this particular setup.

The 2nd calculation would be our troops always being lucky while the enemy never hits, giving us the lowest possible losses.

As a side effect, we also know the shortest and longest possible duration (important for [b]locks).

We can now compare these values against the best we’ve had so far with regard to the goal (max XP, min losses, longest duration, whatever).

If the new setup’s best possible result is worse than the best we calculated before AND its worst possible result is better than the former worst one, we can skip ALL further iterations for this fight, as it cannot be better or worse than any combination before!
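The pruning rule can be sketched roughly as follows (one simple reading, minimising losses and using only the best-case bound; the worst-case fight would bound the other side analogously. `search` and the toy loss model are my assumptions, not the actual simulator):

```python
# Two deterministic fights bracket each setup's outcome; the expensive
# 1000-iteration run is skipped whenever even the best-case bracket
# cannot beat the best result found so far.
def search(setups, best_case, full_simulation):
    best = float("inf")  # lowest losses seen so far
    skipped = 0
    for s in setups:
        if best_case(s) >= best:  # even maximum luck cannot improve on best
            skipped += 1
            continue
        best = min(best, full_simulation(s))  # the expensive full run
    return best, skipped

# Toy loss model (assumed, not Centurion's simulator): optimum near 128.
best, skipped = search(
    setups=range(0, 201, 64),
    best_case=lambda n: abs(n - 128) * 0.4,             # all lucky hits
    full_simulation=lambda n: abs(n - 128) * 0.5 + 10,  # stand-in for 1000 runs
)
print(best, skipped)  # -> 10.0 1
```

The bound function must never exceed the true result, otherwise the pruning could discard the real optimum.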

Let’s assume you optimise an attack with only one troop type, where the first enemy goes down when attacked with 190 troops.

The current way would run 1000×1, 1000×64, 1000×128 and 1000×192 = 4000 calculations before the first difference shows up.

The improved way would do 2×1 (we compute the min/max, and because the values are identical we need no further iterations), 2×64, 2×128 (up to here we only use more troops for no gain), and 2× or 1000×192 (depending on the goal) = 1004 or, with luck, only 6 calculations!
