Mark chunks that are blocking the main thread for world generation as urgent
Implements a general priority system so that the chunks sorted in
the generator queues can prioritize certain chunks over others.
Urgent chunks will jump to the front of the line, ensuring that a
sync chunk load on an ungenerated chunk does not lag the server for
a long period of time if the server's generator queues are filled with
lots of chunks already.
This massively reduces the lag spikes from sync chunk gens.
Then we further prioritize loading order so nearby chunks have higher
priority than distant chunks, reducing the pressure a high no-tick
view distance puts on the server.
Chunks in front of the player have higher priority, to help chunk loading
keep up with fast-traveling players.
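A minimal sketch of that ordering, assuming a simple PrioritizedChunk record (the real
generator queues use different types): urgent chunks first, then nearer chunks, then
chunks in the player's travel direction.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    final class ChunkPrioritySketch {
        record PrioritizedChunk(int x, int z, boolean urgent,
                                double distanceToPlayer, boolean inFrontOfPlayer) {}

        // Urgent chunks sort first, then nearby chunks, then chunks ahead of the player.
        static final Comparator<PrioritizedChunk> PRIORITY =
                Comparator.comparing((PrioritizedChunk c) -> !c.urgent())
                        .thenComparingDouble(PrioritizedChunk::distanceToPlayer)
                        .thenComparing(c -> !c.inFrontOfPlayer());

        static PriorityQueue<PrioritizedChunk> newGeneratorQueue() {
            return new PriorityQueue<>(PRIORITY);
        }
    }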
This commit also improves single-core CPU scenarios in that we will
now automatically disable Async Chunks as well as Minecraft's thread
pool.
It is never recommended to use async chunks on a single-core CPU, as context
switching will be slower than just running it all on the main thread.
This also bumps the default number of server worker threads.
Mojang does not utilize the workers in an efficient manner, resulting
in them using barely any sustained CPU,
so give it more workers so more chunks can be processed concurrently.
This change also improves urgent chunk loading, so players flying into
unloaded chunks will hurt a little bit less (but still hurt)
Ping #3395 #3363 (Not marking as closed, we need to make prevent moving work)
The expected version should be equal to or newer than the one stored.
Although Aikar claims he did this on accident (and NOT my ligatures!), I
claim this is all a big conspiracy by followers of the Taco cult.
When crossing certain chunk boundaries, the client needlessly
calculates light maps for chunk neighbours. In some specific map
configurations, these calculations cause a 500ms+ freeze on the Client.
This patch basically serves as a workaround by sending light maps
to the client, so that it doesn't attempt to calculate them.
This reduces the frame-time impact to a minimum (but it's still there).
This adds the server property `Paper.DisableClassPrioritization` to disable
prioritization of own classes for plugins' classloaders.
This value is not present by default, and prioritization will therefore break any
plugins which abuse behaviour related to not using their own classes
while still bundling them. This is often an issue with failing to
relocate or shade properly, such as when shading plugin APIs like Vault.
A plugin's classloader will now first look for a requested class in the jar it is
loading for, and load it from there. It does not re-use other
plugins' classes if it has the chance to avoid doing so.
If a class is not found in the plugin's own jar but is found elsewhere, the
class from elsewhere will still be chosen. This is
intended behaviour: the loader only prioritises classes in its own
jar; no other plugins' classes will be prioritised in any order other
than the one they were registered in.
In general terms, the patch just loads the class from the plugin's own jar
before it starts looking elsewhere for it.
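A hedged sketch of that lookup order; PluginFirstClassLoader and the registration map
are illustrative names, not Paper's actual classes.

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.Map;

    public class PluginFirstClassLoader extends URLClassLoader {
        // Classes registered by other plugins, kept in registration order.
        private final Map<String, Class<?>> registeredByOtherPlugins;

        public PluginFirstClassLoader(URL pluginJar, ClassLoader parent,
                                      Map<String, Class<?>> registeredByOtherPlugins) {
            super(new URL[]{pluginJar}, parent);
            this.registeredByOtherPlugins = registeredByOtherPlugins;
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            synchronized (getClassLoadingLock(name)) {
                Class<?> found = findLoadedClass(name);
                if (found == null) {
                    try {
                        // Look in the plugin's own jar first...
                        found = findClass(name);
                    } catch (ClassNotFoundException ignored) {
                        // ...then fall back to classes other plugins registered,
                        // in the order they were registered.
                        found = registeredByOtherPlugins.get(name);
                    }
                }
                if (found == null) {
                    found = super.loadClass(name, resolve);
                }
                if (resolve) {
                    resolveClass(found);
                }
                return found;
            }
        }
    }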
Massively reduces memory allocation of 2048-byte buffers by using
an object pool for these.
Uses lots of advanced new capabilities of the Paper codebase :)
Targets 3072 * 8 buffers per 1GB of heap memory, up to a maximum consideration
of 6GB of heap (anything over 6GB won't grow the nibble pool further).
You can control the 3072 number by setting -DPaper.nibbleBucketSize=2048.
Remember this number is multiplied by 8, then by the heap memory in GB.
That is 98304 objects for 4GB of memory, at roughly 2064 bytes each, meaning about 194MB.
You may also control the max number of pooled objects directly, instead of any
dynamic calculation, using -DPaper.maxNibblePoolSize=1024000
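A hedged sketch of that sizing math (class and variable names are illustrative, not
Paper's actual code):

    public final class NibblePoolSizing {
        public static void main(String[] args) {
            long bucketSize = Long.getLong("Paper.nibbleBucketSize", 3072);
            long heapGb = Math.max(1,
                    Math.min(6, Runtime.getRuntime().maxMemory() / (1024L * 1024L * 1024L)));
            long maxPooled = Long.getLong("Paper.maxNibblePoolSize", bucketSize * 8 * heapGb);

            long bytesPerNibble = 2064;  // 2048-byte array plus rough object overhead
            System.out.println(maxPooled + " pooled nibble sections = about "
                    + (maxPooled * bytesPerNibble / (1024 * 1024)) + " MB retained");
            // With the defaults and 4GB of heap: 3072 * 8 * 4 = 98304 objects, roughly 194 MB.
        }
    }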
While this will use a tad more old generation, the allocation rate will drop
significantly, causing fewer young-generation GCs.
This commit has gone through extensive testing for over a day, and we are confident
it no longer has any issues with light corruption.
This commit doesn't do much on its own, but adds a new Java Cleaner API
that lets us hook into Garbage Collector events to reclaim pooled objects and
return them to the pool.
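A minimal sketch of the idea using java.lang.ref.Cleaner (Java 9+); the NibblePool and
PooledNibble names are illustrative. When the holder object becomes unreachable, the
registered action returns the underlying buffer to the pool.

    import java.lang.ref.Cleaner;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    final class NibblePool {
        private static final Cleaner CLEANER = Cleaner.create();
        private final Queue<byte[]> free = new ConcurrentLinkedQueue<>();

        byte[] acquireRaw() {
            byte[] data = free.poll();
            return data != null ? data : new byte[2048];
        }

        // Wraps a buffer so it is returned to the pool once the wrapper is GC'd.
        PooledNibble acquire() {
            byte[] data = acquireRaw();
            PooledNibble holder = new PooledNibble(data);
            // The cleanup action must not reference 'holder' itself, only the buffer.
            CLEANER.register(holder, () -> free.offer(data));
            return holder;
        }

        static final class PooledNibble {
            final byte[] data;
            PooledNibble(byte[] data) { this.data = data; }
        }
    }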
Adds a framework for network packets to know when a packet has finished dispatching,
to get an idea of when a packet is done sending to players.
Rewrites the PooledObjects implementation to properly respect the max pool size and remove
almost all risk of contention.
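A hedged sketch of a bounded pool that respects a maximum size with minimal contention,
using a lock-free queue plus an atomic counter (illustrative only, not the actual
PooledObjects code):

    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.function.Supplier;

    final class BoundedPool<T> {
        private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();
        private final AtomicInteger size = new AtomicInteger();
        private final int maxSize;
        private final Supplier<T> factory;

        BoundedPool(int maxSize, Supplier<T> factory) {
            this.maxSize = maxSize;
            this.factory = factory;
        }

        T acquire() {
            T value = queue.poll();
            if (value == null) {
                return factory.get();      // pool empty: allocate fresh
            }
            size.decrementAndGet();
            return value;
        }

        void release(T value) {
            // Only keep the object if we are under the cap; otherwise let GC take it.
            if (size.incrementAndGet() <= maxSize) {
                queue.offer(value);
            } else {
                size.decrementAndGet();
            }
        }
    }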
Bumps the Paper Async Task Queue to use 2 threads, and properly shuts it down on shutdown.
Use a proper teleport for teleporting to entities in different
worlds.
Validate that the target entity is valid and deny spectate
requests from frozen players.
Also, make sure the entity is spawned on the client before
sending the camera packet. If the entity isn't spawned client-side
when the client receives the camera packet, the client will not
spectate the target entity.
This fixes exploits that let players destroy bedrock with pistons, explosions,
and mushroom/tree generation.
These blocks are designed to not be broken except by creative players/commands,
so protect them from the multitude of methods of destroying them.
A config option is provided if you would rather let players use these exploits, letting
them destroy the world's end portals and get on top of the nether easily.
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC, and as with ANY update, please do your own testing.
Bukkit Changes:
ffc8e4ca SPIGOT-5716: Clarify documentation of MultipleFacing
CraftBukkit Changes:
d07a78b1 SPIGOT-5716: Clarify documentation of MultipleFacing
46a13860 SPIGOT-5718: Block.BreakBlockNaturally does not reflect tool used
214ffea9 SPIGOT-5727: GameRule doImmediateRespawn cannot be set per-world
Spigot Changes:
2f5d615f SPIGOT-5730: Modernise inventory patch
a2bdb119 SPIGOT-5679: Add config option for end portal activation sound
Closes #3352
I swear the crap that stuff will abuse to make stuff happen is insane.
Hash codes apparently changing the behavior of stuff based on their value, so
reverting e9fcee1190. Fixes #3346, fixes #3341
I used the IDE to convert the streams to non-stream code, so there shouldn't
be any risk of behavior change. I only did minor optimization of the
generated code to remove unnecessary things.
I expect us to just drop this patch on the next major update, re-apply
it with the IDE again, and re-apply the collections optimization.
Optimize the collection by creating a list of the key and value instead of a set.
This gives us faster foreach iteration, and also avoids map lookups on
the values when needed.
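A hedged sketch of that pattern with generic types (not the actual class touched by the
patch): keeping keys and values together in a list avoids a map lookup per element
during iteration.

    import java.util.List;
    import java.util.Map;

    final class EntryListExample {
        record Entry<K, V>(K key, V value) {}

        static <K> long sumViaKeySet(Map<K, Integer> map) {
            long total = 0;
            for (K key : map.keySet()) {
                total += map.get(key);      // one hash lookup per key
            }
            return total;
        }

        static long sumViaEntryList(List<Entry<String, Integer>> entries) {
            long total = 0;
            for (Entry<String, Integer> e : entries) {
                total += e.value();         // no lookup, just a field read
            }
            return total;
        }
    }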
Removed streams from hoppers and also fixed a mistake in the logic.
When this patch was ported to 1.14/1.15, a line of code was put in
the wrong place which disabled a significant portion of the improvement.
Replaced usages of streams in isEmpty and ItemStack checks (see the sketch after this list)
Replaced usage of streams in pulling loop
Replaced usage of streams in Lootable Inventory isEmpty() check
Only check for refilling Lootable Inventory when accessing first slot, not all
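A minimal sketch of the stream-to-loop replacement mentioned above, using a generic slot
list rather than the real hopper/inventory classes:

    import java.util.List;

    final class HopperLikeChecks {
        record Item(int count) {}

        // Stream version: allocates a stream and lambda per call.
        static boolean isEmptyStreams(List<Item> slots) {
            return slots.stream().allMatch(item -> item.count() == 0);
        }

        // Loop version: no allocation, early exit on the first non-empty slot.
        static boolean isEmptyLoop(List<Item> slots) {
            for (int i = 0, size = slots.size(); i < size; i++) {
                if (slots.get(i).count() != 0) {
                    return false;
                }
            }
            return true;
        }
    }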
All of these in general were pretty significant hits, so this single commit
is going to cause tacos to magically appear in front of you every day.
🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮🌮
Nom Nom Nom
If you hate tacos, you're not allowed to use this improvement.
Also ignore the renames, pulled a lot of PRs.
Server.reload() had logic to give time for tasks to shut down,
however shutdown did not...
This adds a 5-second grace period for any async tasks to finish and warns
if any are still running after that delay, just as reload does.
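A hedged sketch of the grace-period logic, using a generic supplier of active worker
names rather than the real Bukkit scheduler calls:

    import java.util.List;
    import java.util.function.Supplier;
    import java.util.logging.Logger;

    final class AsyncShutdownGrace {
        static void awaitAsyncTasks(Supplier<List<String>> activeWorkerNames, Logger log)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + 5000L;
            // Poll for up to 5 seconds while async workers are still busy.
            while (System.currentTimeMillis() < deadline && !activeWorkerNames.get().isEmpty()) {
                Thread.sleep(50L);
            }
            // Anything still running after the grace period gets a warning, just like reload.
            for (String worker : activeWorkerNames.get()) {
                log.warning("Async task still running during shutdown: " + worker);
            }
        }
    }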
Closes #3337
If anything used setPositionRaw, it left the potential for an AABB
to be left stale at the old location, which could cause massive
AABBs if movement then got called at the new position.
This guarantees that any time we set an entity's position, we also
update its AABB.
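A minimal sketch of keeping the AABB in sync with every position write (simplified
types; the real entity class differs):

    final class EntityPositionSketch {
        record AABB(double minX, double minY, double minZ,
                    double maxX, double maxY, double maxZ) {}

        private double x, y, z;
        private final double width = 0.6, height = 1.8;
        private AABB boundingBox = computeBoundingBox(0, 0, 0);

        private AABB computeBoundingBox(double x, double y, double z) {
            double half = width / 2.0;
            return new AABB(x - half, y, z - half, x + half, y + height, z + half);
        }

        // Any position write also refreshes the bounding box, so it can never go stale.
        void setPosition(double x, double y, double z) {
            this.x = x;
            this.y = y;
            this.z = z;
            this.boundingBox = computeBoundingBox(x, y, z);
        }
    }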
We store a reference to the chunk the entity is currently in, so use it
to more accurately unregister it in chunkCheck
Should maybe fix some entity loss issues.
Obscure detail: if you teleport right on a chunk border, the collision check adds +1
and will check the unloaded neighbor.
But the call to load that chunk then returned null if it was pending unload, such
as the load we did in the Player List.
However, we want gen=true for players here anyway, so use getType
This also cleans up the implementation of Async Chunks to get rid of most
Consumer callbacks and instead return futures.
This lets us propagate errors correctly up the future chain
(barring one getting lost even deeper in the chain...),
so exceptions can now bubble up to plugins using getChunkAtAsync.
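A hedged sketch of the callback-to-future change; the Chunk type and loadChunkInternal
helper here are placeholders, not the real implementation behind getChunkAtAsync:

    import java.util.concurrent.CompletableFuture;
    import java.util.function.Consumer;

    final class AsyncChunkFutures {
        record Chunk(int x, int z) {}

        // Old style: errors thrown inside the callback have nowhere useful to go.
        static void getChunkAtAsync(int x, int z, Consumer<Chunk> callback) {
            callback.accept(loadChunkInternal(x, z));
        }

        // New style: exceptions thrown while loading complete the future
        // exceptionally and bubble up to whoever chained onto it.
        static CompletableFuture<Chunk> getChunkAtAsyncFuture(int x, int z) {
            return CompletableFuture.supplyAsync(() -> loadChunkInternal(x, z));
        }

        private static Chunk loadChunkInternal(int x, int z) {
            // placeholder for the real generation/load work
            return new Chunk(x, z);
        }

        public static void main(String[] args) {
            getChunkAtAsyncFuture(0, 0)
                .thenAccept(chunk -> System.out.println("Loaded " + chunk))
                .exceptionally(ex -> { ex.printStackTrace(); return null; });
        }
    }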
While there is more to do further down the collision system, remove some of the wrapping
Spliterator stuff, as even this wrapper stream has shown up in profiling.
With other collision optimizations, we might even be able to avoid the inner streams too.
This patch replaces the vanilla collision code for both block and entity collisions with faster implementations by JellySquid, used originally in her Lithium mod.
Optimizes Full Block voxel collisions, and removes streams from Entity collisions
Original code by JellySquid, licensed under GNU Lesser General Public License v3.0
you can find the original code on https://github.com/jellysquid3/lithium-fabric/tree/1.15.x/fabric (Yarn mappings)
Ported by
Co-authored-by: Zoutelande <54509836+Zoutelande@users.noreply.github.com>
Touched up by Aikar to keep previous Paper optimizations
The collision code takes an AABB and generates a cuboid of checks rather
than a cylinder, so at high velocity this can generate a lot of chunk checks.
Treat an unloaded chunk as a collision for entities, and also for players if
the "prevent moving into unloaded chunks" setting is enabled.
If that setting is not enabled, collisions will be ignored for players, since
movement will only load the chunk the player enters anyway, avoiding loading
massive amounts of surrounding chunks due to large AABB lookups.
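A hedged sketch of that rule; ChunkSource and the config flag are placeholders for the
real server classes and setting:

    final class UnloadedChunkCollision {
        interface ChunkSource {
            boolean isLoaded(int chunkX, int chunkZ);
        }

        // Returns true if the chunk at this block position should be treated as a
        // solid collision because it is not loaded.
        static boolean collidesWithUnloaded(ChunkSource chunks, int blockX, int blockZ,
                                            boolean isPlayer, boolean preventMovingIntoUnloaded) {
            if (chunks.isLoaded(blockX >> 4, blockZ >> 4)) {
                return false;                   // loaded chunks collide normally
            }
            // Entities always collide with unloaded chunks; players only do if the
            // "prevent moving into unloaded chunks" setting is enabled.
            return !isPlayer || preventMovingIntoUnloaded;
        }
    }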
Fixes #3321
This was using SIGNIFICANT amounts of memory, allocating many
long[]s for BitSets for every ProtoChunk in the cache that had
been unloaded and reloaded.
This will result in a nice memory reduction.
Actually showed up in profiling as decent time spent here...
Noticed y/z was missing the final it used to have, while x had it. Somehow it
must have gotten messed up in some update. People suggest this shouldn't have
mattered anyway, but let's put it back for safety.
Added a cache of the hash code, as well as optimized the hash code using larger primes.
Also stored the long value of the x/y/z so that equals can compare a single long,
as well as have that long value cached for .asLong()
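A hedged sketch of the idea (cached hash code, packed long for equals and asLong); the
bit layout and prime constant here are illustrative, not necessarily the vanilla
encoding.

    final class CachedBlockPos {
        private final int x, y, z;
        private final long packed;   // x/y/z packed into a single long, computed once
        private final int hash;      // hash code computed once, with a larger prime multiplier

        CachedBlockPos(int x, int y, int z) {
            this.x = x;
            this.y = y;
            this.z = z;
            // 26 bits for x, 26 bits for z, 12 bits for y (illustrative layout).
            this.packed = ((long) (x & 0x3FFFFFF) << 38)
                    | ((long) (z & 0x3FFFFFF) << 12)
                    | (y & 0xFFF);
            int h = x * 1_000_003 + y;
            this.hash = h * 1_000_003 + z;
        }

        int getX() { return x; }
        int getY() { return y; }
        int getZ() { return z; }

        long asLong() {
            return packed;            // cached, no re-packing on every call
        }

        @Override
        public boolean equals(Object other) {
            // One long comparison instead of three int comparisons.
            return other instanceof CachedBlockPos pos && pos.packed == this.packed;
        }

        @Override
        public int hashCode() {
            return hash;
        }
    }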