After some community testing, we found that having many monster instances on a map causes a lot of memory usage. As it turned out, the root cause was how we do pathfinding for the monsters. So I took this as a challenge to optimize memory and CPU usage.

Reducing memory usage (1)

The major flaw was using one pathfinder instance per monster. Every pathfinder instance had its own instance of a "GridNetwork", which had an array of 65,536 possible nodes. These nodes were created on demand, but kept in memory for reuse until the server shut down.
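To give a rough idea of that layout, here is a simplified sketch; the member names are illustrative assumptions, not the actual OpenMU classes:

```csharp
// Simplified sketch of the old design; member names are illustrative assumptions.
public enum NodeStatus { Undefined, Open, Closed }

public sealed class Node
{
    public byte X;
    public byte Y;
    public int CostUntilNow;
    public Node? PreviousNode;
    public NodeStatus Status;
}

public sealed class GridNetwork
{
    // A 256 x 256 map allows up to 65,536 nodes.
    // They were created on demand, but kept alive for reuse until server shutdown.
    private readonly Node?[] nodes = new Node?[256 * 256];
}

public sealed class PathFinder
{
    private readonly GridNetwork network = new();
}

public sealed class Monster
{
    // The flaw: one PathFinder, and therefore one potential 65k node array, per monster.
    private readonly PathFinder pathFinder = new();
}
```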

So, the first thing I did was to pool these pathfinder objects per map. A handful of them can easily handle all monsters of a map, because a monster only searches for a path every few seconds, and only if a player is around. This alone probably solved the initial issue, but I wanted more.
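A minimal sketch of such a per-map pool, reusing the PathFinder type from the sketch above; the pool type and its API are my assumptions, not the actual OpenMU implementation:

```csharp
using System.Collections.Concurrent;

// Hypothetical per-map pool: a handful of pathfinders shared by all monsters of a map.
public sealed class PathFinderPool
{
    private readonly ConcurrentBag<PathFinder> pool = new();

    // Take an existing pathfinder from the pool, or create a new one if none is available.
    public PathFinder Rent() => this.pool.TryTake(out var finder) ? finder : new PathFinder();

    // Put the pathfinder back so the next monster can reuse it.
    public void Return(PathFinder finder) => this.pool.Add(finder);
}
```

A monster would rent a pathfinder only for the duration of a single path search and return it right afterwards, which is why a few instances suffice for a whole map.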

Reducing CPU usage

I observed that preparing the pathfinder between requests was pretty inefficient: it still had to reset all the nodes, up to 65k as mentioned above. Additionally, searching for a path in some maps could take longer than expected. Take the Lost Tower map as an example. It has a lot of walls, so a monster may see a player within its range but never find a path to it within a reasonable number of steps. For context, the limit of the walking network packet is 16 steps, and the line-of-sight distance limit of a walk target (MonsterDefinition.ViewRange) is much lower than that (at most 10).

So, I had the idea of limiting the path search to a small segment of the map. To accomplish that, I implemented a "ScopedGridNetwork", which uses a dynamically sized segment of 8x8 or 16x16 coordinates. To define the segment, we calculate the midpoint between the start and end coordinates and then check whether both coordinates fit into an 8x8 or 16x16 segment around that point.
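Roughly, the segment selection could look like the following sketch; coordinate handling and names are my assumptions, and the actual ScopedGridNetwork may differ in details:

```csharp
using System;

// Pick the smallest segment (8x8 or 16x16) around the midpoint that contains both points.
public readonly record struct Point(byte X, byte Y);

public static class SegmentSelector
{
    // Returns the top-left corner and side length of the chosen segment,
    // or null if even a 16x16 segment cannot contain both points.
    public static (Point TopLeft, int Size)? SelectSegment(Point start, Point end)
    {
        var centerX = (start.X + end.X) / 2;
        var centerY = (start.Y + end.Y) / 2;

        foreach (var size in new[] { 8, 16 })
        {
            var left = Math.Clamp(centerX - (size / 2), 0, 256 - size);
            var top = Math.Clamp(centerY - (size / 2), 0, 256 - size);

            bool Contains(Point p) => p.X >= left && p.X < left + size
                                   && p.Y >= top && p.Y < top + size;

            if (Contains(start) && Contains(end))
            {
                return (new Point((byte)left, (byte)top), size);
            }
        }

        return null; // Too far apart; the request can fail immediately.
    }
}
```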

The smaller segment means less work to do: first, we no longer have to prepare up to 65k nodes; second, we no longer dedicate a lot of computation to hopeless requests - we fail much earlier.

Reducing memory usage (2)

Because we're now using map segments, we can also reduce the size of the node array in the network to at most 256 (16x16) elements, instead of 65k.
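A rough sketch of such a scoped grid, reusing the Node and Point types from the sketches above; the names are assumptions, not the actual ScopedGridNetwork code:

```csharp
using System;

// At most 16 x 16 = 256 nodes instead of 65,536; resetting it between requests is cheap.
public sealed class ScopedGrid
{
    private readonly Node?[] nodes = new Node?[16 * 16]; // sized for the largest segment
    private Point topLeft;
    private int size;

    // Prepare the grid for a new request by clearing only the slots of the current segment.
    public void Prepare(Point topLeft, int size)
    {
        this.topLeft = topLeft;
        this.size = size;
        Array.Clear(this.nodes, 0, size * size);
    }

    // Get (or lazily create) the node for an absolute map coordinate within the segment.
    public Node GetNode(Point position)
    {
        var index = ((position.Y - this.topLeft.Y) * this.size) + (position.X - this.topLeft.X);
        return this.nodes[index] ??= new Node { X = position.X, Y = position.Y };
    }
}
```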

Now that the CPU usage was greatly reduced, why not handle the monsters of all maps with a single handful of pooled pathfinder objects? Well, I implemented this as well, of course :-)

Summary

Thanks to these changes, we now have a very efficient pathfinding implementation in OpenMU. To give you an idea: one pathfinding request takes 0.025 ms on average on my machine (Ryzen 9 5900X). That means it can handle about 40k pathfinding requests per second, per CPU core.
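For reference, the throughput figure follows directly from the average per-request time:

$$\frac{1\ \text{request}}{0.025\ \text{ms}} = 40\ \text{requests per ms} = 40{,}000\ \text{requests per second per core}$$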