Removes synchronization from sending packets
Normal packet sends no longer need to be wrapped and queued the way they used to be.
Adds more packet queue immunities beyond Keep Alive, letting the following
scenarios go out without delay:
- Keep Alive
- Chat
- Kick
- All of the packets during the Player Joined World event
Hopefully the latter also helps with join-timeout issues on slow connections.
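A minimal sketch of the bypass check; the packet kinds and the joining-world
flag are placeholders, not Paper's actual identifiers:

    import java.util.EnumSet;
    import java.util.Set;

    // Sketch only: decide whether a packet may skip the send queue.
    final class PacketQueueImmunity {
        enum Kind { KEEP_ALIVE, CHAT, KICK, OTHER }

        private static final Set<Kind> IMMEDIATE =
            EnumSet.of(Kind.KEEP_ALIVE, Kind.CHAT, Kind.KICK);

        /** True if the packet may bypass the queue and go out immediately. */
        static boolean canSendImmediate(Kind kind, boolean playerJoiningWorld) {
            // Everything sent during the Player Joined World event is also immune.
            return playerJoiningWorld || IMMEDIATE.contains(kind);
        }
    }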
Removes packet queue processing from off the main thread
- for the few cases where it is allowed (handshaking/login/status), ordering is
not necessary, nor should it even be happening concurrently in the first place
Ensures packets sent asynchronously are dispatched on the main thread.
This helps ensure safety for ProtocolLib, as packet listeners
commonly access world state. You can still schedule
a packet to be sent async, but it'll be dispatched sync for packet
listeners to process.
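A rough sketch of that dispatch rule, using placeholder names rather than the
NetworkManager's real fields:

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Sketch: packets sent off-thread are queued and flushed on the main
    // thread, so packet listeners always observe world state safely.
    final class MainThreadPacketDispatcher {
        private final Thread mainThread;
        private final Queue<Runnable> pending = new ConcurrentLinkedQueue<>();

        MainThreadPacketDispatcher(Thread mainThread) {
            this.mainThread = mainThread;
        }

        void sendPacket(Runnable writeAndFlush) {
            if (Thread.currentThread() != mainThread) {
                pending.add(writeAndFlush); // dispatched later, on the main thread
            } else {
                writeAndFlush.run();
            }
        }

        /** Drained from the main thread each tick. */
        void flush() {
            Runnable task;
            while ((task = pending.poll()) != null) {
                task.run();
            }
        }
    }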
This should solve some deadlock risks.
It may also provide a decent performance improvement: thread synchronization
imposes memory barriers, and packet sending is a really hot activity, so never
entering a synchronized block avoids that cost entirely.
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and as with ANY update, please do your own testing
Bukkit Changes:
b999860d SPIGOT-2304: Add LootGenerateEvent
CraftBukkit Changes:
77fd87e4 SPIGOT-2304: Implement LootGenerateEvent
a1a705ee SPIGOT-5566: Doused campfires & fires should call EntityChangeBlockEvent
41712edd SPIGOT-5707: PersistentDataHolder not Persistent on API dropped Item
Process loads outside of any canSleep check. The original intent was to
apply those restrictions only to generations, but I realized I had some
checks higher up the call chain.
Reworked the back-off strategy to just run every 1 millisecond per world,
and to apply the per-tick limit to generations only.
This guarantees that your chunk will load with at most around 1ms of delay.
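A minimal sketch of that rule, with assumed names (one instance per world):

    // Sketch: loads are gated only by a 1ms-per-world cadence; the
    // per-tick budget applies to generations alone.
    final class ChunkTaskBackoff {
        private static final long RUN_INTERVAL_NANOS = 1_000_000L; // 1 millisecond
        private final int maxGenerationsPerTick;
        private long lastRunNanos;
        private int generationsThisTick;

        ChunkTaskBackoff(int maxGenerationsPerTick) {
            this.maxGenerationsPerTick = maxGenerationsPerTick;
        }

        void onTickStart() {
            generationsThisTick = 0;
        }

        boolean canRunLoads(long nowNanos) {
            if (nowNanos - lastRunNanos < RUN_INTERVAL_NANOS) {
                return false; // at most ~1ms of added latency for a load
            }
            lastRunNanos = nowNanos;
            return true;
        }

        boolean canRunGeneration() {
            return generationsThisTick++ < maxGenerationsPerTick;
        }
    }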
Additionally, fire midTick processing in a few more places, notably the
oversleep section, so we can keep processing loads there too; that window
alone can be up to 50ms.
Speaking of oversleep, we had a bug in our Timings changes that caused
oversleep to not sleep the correct amount.
Because we moved it into the NEXT tick instead of THIS tick, the
value of nextTick had already been advanced by +50ms, creating the
risk of sleeping more than intended; more importantly, this
caused every task that was trying NOT to run during oversleep to actually
run during oversleep.
This is now fixed.
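Roughly, the corrected wait (variable names are assumptions, not the actual
Timings code):

    // Sketch: oversleep now runs at the START of the next tick, so
    // nextTickTime has already been advanced by +50ms and must be undone.
    final class Oversleep {
        private static final long TICK_MILLIS = 50L;

        static void oversleep(long nextTickTime) {
            long target = nextTickTime - TICK_MILLIS; // was: nextTickTime, up to 50ms too long
            while (System.currentTimeMillis() < target) {
                Thread.onSpinWait(); // stand-in for the real wait loop
            }
        }
    }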
Another small tweak is to the /tps command: it no longer shows the star when
TPS is right at 20.
Due to imprecision in sleep timing, TPS is commonly reported as 20.02.
That made the star show up almost constantly, so now it only appears when
we actually hit a real "catchup".
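A sketch of the formatting change; the threshold here is illustrative, not the
patch's literal value:

    // Sketch: only show the catch-up star when TPS is meaningfully above 20.
    final class TpsFormat {
        static String format(double tps) {
            String star = tps > 20.05 ? "*" : ""; // 20.02 from sleep jitter no longer qualifies
            return star + Math.min(Math.round(tps * 100.0) / 100.0, 20.0);
        }
    }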
This commit also improves the changes to the CallbackExecutor: it is now
recursion-safe as well.
It was possible for the executor to run tasks out of the desired order
if an executor task scheduled more executor tasks.
We solve this by ensuring new additions do not enter the currently iterated queue.
Each depth level will have its own queue.
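A minimal sketch of that pattern (not the literal CraftBukkit class):

    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.concurrent.Executor;

    // Sketch: tasks queued while the executor is draining go into a fresh
    // queue, so each recursion depth drains its own queue in order.
    final class CallbackExecutor implements Executor {
        private Queue<Runnable> queue = new ArrayDeque<>();

        @Override
        public void execute(Runnable task) {
            queue.add(task);
        }

        void run() {
            Queue<Runnable> toRun = this.queue;
            this.queue = new ArrayDeque<>(); // new additions land here, not in toRun
            Runnable task;
            while ((task = toRun.poll()) != null) {
                task.run();
            }
        }
    }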
Fixes #3220
We still keep the vanilla process of waiting for the existing session to be
removed before logging in, by storing a separate map of pending logins.
We also fire the callback using the executor, in case further recursion causes
any trouble.
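A rough sketch of the pending map; the shape and names are assumptions:

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch: a login proceeds only once no pending entry holds the profile.
    final class PendingLogins {
        private final Map<UUID, Object> pending = new ConcurrentHashMap<>();

        /** Returns false if a login for this profile is already pending. */
        boolean tryBegin(UUID profileId, Object connection) {
            return pending.putIfAbsent(profileId, connection) == null;
        }

        void finish(UUID profileId) {
            pending.remove(profileId);
        }
    }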
Appending to the tail of the chunk tasks leaves a
window for the chunk to be moved to a
non-ticking status.
Additionally, use CB's callback executor so we
can ensure that we are not incorrectly
scheduling.
See: https://gist.github.com/aikar/dd22bbd2a3d78a2fd3d92e95e9f28dc6
As part of post-processing a chunk, we can call ChunkConverter.
ChunkConverter then kicks off major physics updates, and when blocks
have connections across chunk boundaries, a recursive risk
arises where A updates a block that triggers a physics request.
That physics request may trigger a chunk request, which then enqueues
a task into the Mailbox ChunkTaskQueueSorter.
If anything requests that same chunk while it is in the middle of conversion,
its mailbox queue is going to be held up, so the subsequent chunk request
will be unable to proceed.
We delay post-processing of Chunk.A() by 1 "pass", re-stuffing it back into
the executor so that the mailbox ChunkQueue is then considered empty.
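Sketched with assumed names:

    import java.util.concurrent.Executor;

    // Sketch: defer post-processing one "pass" instead of running it inline
    // while the chunk's mailbox queue is still considered busy.
    final class PostProcessDefer {
        static void schedule(Runnable postProcess, Executor chunkExecutor) {
            // was: postProcess.run(), which could recurse into the same mailbox
            chunkExecutor.execute(postProcess); // mailbox is empty by the time this runs
        }
    }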
This successfully fixed a recurring and highly reproducible crash
in heightmaps.