Process loads outside of any canSleep check. The original intent was to
apply those restrictions only to generations, but I realized some checks
remained higher up the call chain.
Reworked the back-off strategy to simply run once every 1 millisecond
per world, and to apply the per-tick limit to generations only.
This guarantees that a pending chunk load will start with at most
around 1ms of delay.
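
A minimal sketch of that shape; names such as lastMidTickExecute and
pollLoadTask, and the generation cap, are illustrative, not Paper's
actual fields:

    import java.util.List;
    import java.util.concurrent.TimeUnit;

    class MidTickChunkTasks {
        static final long BACKOFF_NANOS = TimeUnit.MILLISECONDS.toNanos(1);
        static final int MAX_GENERATIONS_PER_TICK = 200; // hypothetical cap

        interface WorldQueue {
            boolean pollLoadTask();       // run one queued load, if any
            boolean pollGenerationTask(); // run one queued generation, if any
        }

        static final class World {
            final WorldQueue queue;
            long lastMidTickExecute; // nanoTime of this world's last pass
            int generationsThisTick;
            World(WorldQueue queue) { this.queue = queue; }
        }

        // Called from many points during the tick, including oversleep.
        static void midTick(List<World> worlds) {
            long now = System.nanoTime();
            for (World world : worlds) {
                // Back off: at most one pass per millisecond per world.
                if (now - world.lastMidTickExecute < BACKOFF_NANOS) {
                    continue;
                }
                world.lastMidTickExecute = now;
                // Loads always run; no canSleep-style restriction here.
                world.queue.pollLoadTask();
                // Only generations honor the per-tick limit.
                if (world.generationsThisTick < MAX_GENERATIONS_PER_TICK
                        && world.queue.pollGenerationTask()) {
                    world.generationsThisTick++;
                }
            }
        }
    }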
Additionally, fire midTick processing in a few more places, notably the
oversleep section, so we keep processing loads there too; oversleep has
a large window of up to 50ms.
Speaking of oversleep: we had a bug in our implementation changes for
Timings that caused oversleep to not sleep the correct amount.
Because oversleep was moved into the NEXT tick instead of THIS tick,
the value of nextTick had already been advanced by +50ms. That risked
sleeping longer than intended, and, more importantly, it caused every
task that was trying to NOT run during oversleep to actually run
during oversleep.
This is now fixed.
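
Roughly the shape of the fix, with hypothetical names; the key is that
the oversleep deadline comes from the CURRENT tick boundary, and
nextTick is only advanced afterwards:

    class TickLoop {
        static final long TICK_NANOS = 50_000_000L; // 50ms per tick
        long nextTickNanos = System.nanoTime() + TICK_NANOS;
        boolean isOversleep;

        // Old behavior: nextTickNanos had already been advanced by +50ms
        // before oversleeping, so the deadline was one tick too far out
        // and anything checking isOversleep saw the wrong state.
        void endOfTick() throws InterruptedException {
            isOversleep = true;
            sleepUntil(nextTickNanos); // at most ~50ms; a good spot to
                                       // fire midTick processing too
            isOversleep = false;
            nextTickNanos += TICK_NANOS; // only now move to the next tick
        }

        void sleepUntil(long deadline) throws InterruptedException {
            long left;
            while ((left = deadline - System.nanoTime()) > 0) {
                Thread.sleep(left / 1_000_000L, (int) (left % 1_000_000L));
            }
        }
    }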
Another small tweak is to the /tps command: no longer show the star
when TPS is right at 20.
Due to inefficiencies in the sleep precision, TPS commonly reads 20.02.
That made the star show up almost constantly, so now it only appears
when we actually hit a real "catchup".
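
A sketch of the adjusted formatting; the exact threshold shown is an
illustrative choice, not necessarily the value used:

    final class TpsFormat {
        // ~20.02 is just sleep jitter, not a real catch-up, so require
        // the average to clearly exceed 20 before showing the star.
        static String format(double tps) {
            String star = tps > 20.5 ? "*" : "";
            return star + Math.min(Math.round(tps * 100.0) / 100.0, 20.0);
        }
    }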
This commit also improves the changes to the CallbackExecutor: it is
now recursion safe.
Previously, the executor could run tasks out of the desired order if an
executor task scheduled more executor tasks.
We solve this by ensuring new additions never enter the queue currently
being iterated; each depth level gets its own queue.
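
A minimal sketch of a recursion-safe executor along these lines (not
the exact CallbackExecutor code):

    import java.util.ArrayDeque;
    import java.util.Deque;

    final class CallbackExecutor implements Runnable {
        private Deque<Runnable> queued = new ArrayDeque<>();

        void queue(Runnable task) {
            // Mid-run, 'queued' has been swapped for a fresh deque below,
            // so additions never enter the deque being iterated.
            queued.add(task);
        }

        @Override
        public void run() {
            // Give this depth level its own queue; tasks scheduled while
            // draining land in the replacement and run on the next entry.
            Deque<Runnable> current = queued;
            queued = new ArrayDeque<>();
            Runnable task;
            while ((task = current.poll()) != null) {
                task.run(); // may queue() and even run() recursively
            }
        }
    }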
Fixes #3220
We still keep the vanilla process of waiting for the existing session
to be removed before logging in, by storing a separate map of pending
logins.
Also fire the callback using the executor, in case further recursion
causes any trouble.
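
A rough sketch of that flow; the map shapes and names here are
hypothetical:

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executor;

    final class PendingLogins {
        private final Map<UUID, Object> activeSessions = new ConcurrentHashMap<>();
        private final Map<UUID, Runnable> pending = new ConcurrentHashMap<>();
        private final Executor callbackExecutor; // e.g. the executor above

        PendingLogins(Executor callbackExecutor) {
            this.callbackExecutor = callbackExecutor;
        }

        void tryLogin(UUID id, Runnable completeLogin) {
            // Vanilla behavior kept: wait for the existing session with
            // the same id to be removed before the new login proceeds.
            if (activeSessions.containsKey(id)) {
                pending.put(id, completeLogin);
            } else {
                completeLogin.run();
            }
        }

        void onSessionRemoved(UUID id) {
            Runnable waiting = pending.remove(id);
            if (waiting != null) {
                // Fire through the executor in case we are already inside
                // a callback; direct recursion here could cause trouble.
                callbackExecutor.execute(waiting);
            }
        }
    }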
This will help prevent many cases of entities being unregistered
during entity ticking.
It currently delays chunk unloads and async chunk load callbacks.
Mid-tick chunk tasks are also dropped during entity ticking to further
reduce this risk.
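
A sketch of the guard, with hypothetical names; callbacks that can
unregister entities are deferred until the entity tick finishes:

    import java.util.ArrayDeque;

    final class EntityTickGuard {
        private final ArrayDeque<Runnable> deferred = new ArrayDeque<>();
        private boolean tickingEntities;

        void runOrDefer(Runnable chunkCallback) {
            // Chunk unloads and async chunk load callbacks can unregister
            // entities; never run them mid entity tick.
            if (tickingEntities) {
                deferred.add(chunkCallback);
            } else {
                chunkCallback.run();
            }
        }

        void tickEntities(Runnable entityTick) {
            tickingEntities = true;
            try {
                entityTick.run(); // mid-tick chunk tasks also skipped here
            } finally {
                tickingEntities = false;
                Runnable task;
                while ((task = deferred.poll()) != null) {
                    task.run();
                }
            }
        }
    }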
Forgot to flip the pending boolean back to false, causing it to copy
empty data on the next tick if nothing else triggered a load.
I haven't managed to actually reproduce the crash others got, but I did
verify that the bad copy was occurring and erasing the data.
Also fixed a bug where the chunk load callback was not executing before
another one was scheduled.
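
The shape of the pending-flag fix, assuming the copy consumes its
source (names hypothetical):

    import java.util.ArrayList;
    import java.util.List;

    final class PendingCopy {
        private final List<Object> source = new ArrayList<>();
        private final List<Object> snapshot = new ArrayList<>();
        private boolean pending;

        void onLoad(Object data) {
            source.add(data);
            pending = true;
        }

        void copyIfPending() {
            if (!pending) {
                return; // nothing new; keep the last snapshot intact
            }
            pending = false; // the missing reset: without it, the next
                             // tick re-copied the now-empty source and
                             // erased the snapshot
            snapshot.clear();
            snapshot.addAll(source);
            source.clear(); // the copy consumes its source
        }
    }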
Credit to Spotted for the idea
A lot of the new chunk system requires constant back and forth with
the main thread to handle priority scheduling and to ensure conflicting
tasks do not run at the same time.
The issue is that these queues are only checked at either:
A) Sync Chunk Loads
B) End of Tick while sleeping
This results in generating chunks sitting and waiting for a full tick
to complete before the next unit of work even starts.
It also delays the loading of chunks until that same point.
We will now periodically poll the chunk task queues throughout the
tick, looking for work to do.
We do this fairly across all worlds, not just the one being ticked, so
each world gets 1 task processed before the next pass begins.
We also cap throughput at 1 task per world per 0.1ms, or 200 max per
tick, to ensure a high volume of tasks does not overload the current
tick time.
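
A sketch of that polling loop under those caps (hypothetical names, not
the actual scheduler code):

    import java.util.List;
    import java.util.concurrent.TimeUnit;

    final class MidTickPolling {
        static final long PER_WORLD_NANOS = TimeUnit.MICROSECONDS.toNanos(100); // 0.1ms
        static final int MAX_TASKS_PER_TICK = 200;

        interface World {
            boolean pollChunkTask(); // runs one queued chunk task, if any
        }

        private final List<World> worlds;
        private final long[] lastRun; // nanoTime of each world's last task
        private int tasksThisTick;

        MidTickPolling(List<World> worlds) {
            this.worlds = worlds;
            this.lastRun = new long[worlds.size()];
        }

        void onTickStart() { tasksThisTick = 0; }

        // Called periodically throughout the tick, not just at sync chunk
        // loads or while sleeping at the end of the tick.
        void poll() {
            boolean ranAny = true;
            while (ranAny && tasksThisTick < MAX_TASKS_PER_TICK) {
                ranAny = false;
                long now = System.nanoTime();
                // Fair pass: every world gets 1 task before any gets a 2nd.
                for (int i = 0; i < worlds.size(); i++) {
                    if (now - lastRun[i] < PER_WORLD_NANOS) {
                        continue; // throttled: 1 per world per 0.1ms
                    }
                    if (worlds.get(i).pollChunkTask()) {
                        lastRun[i] = now;
                        ranAny = true;
                        if (++tasksThisTick >= MAX_TASKS_PER_TICK) {
                            return; // global cap: 200 per tick
                        }
                    }
                }
            }
        }
    }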
At a view distance of 15, chunk loading was visibly faster on the
client.
Flying at high speed in spectator mode was able to keep up with chunk
loading (as long as the chunks were already generated).