Fixes to Folia

Folia breaks a lot of mechanics and systems. Canvas aims to fix as many of them as possible. This page documents what Canvas fixes and how. It strictly covers behavioral fixes and restorations of Vanilla systems, not crash and bug fixes in Folia itself.

  • Fixes the /bossbar command
  • Fixes the /dialog command
  • Fixes the /loot command
  • Fixes the /ride command - Changes merged into Folia 1.21.11
  • Fixes the /rotate command - Changes merged into Folia 1.21.11
  • Fixes the /spectate command
  • Fixes the /spreadplayers command
    • This command was completely rewritten to run primarily asynchronously, scheduling onto regions only when data needs to be validated. This keeps the command as performant as possible, since it does not affect the region threads, prevents thread ownership issues, and is much safer for region threading
    • See canvas-server/minecraft-patches/sources/net/minecraft/server/commands/SpreadPlayersCommand.java.patch for the full patch implementation
  • Fixes the /tag command
  • Fixes the /tick command
    • This is fixed with help from the rewrite-scheduler patch, which includes a system for changing the tick rate.
  • Fixes the /waypoint command
    • More changes are documented below
  • Fixes the /save-all command
    • This command was redone to complete in an asynchronous fashion, marking all currently ticking regions in all worlds to be fully saved on their next tick, saving all their chunks and players. This prevents thread ownership issues and is much safer for region threading
  • Fixes the /tp command resetting pitch/yaw to 0 when run, instead of keeping the teleported entity's X/Y rotation
  • Fixes Folia#443
  • Fixes Folia#436
  • Fixes Folia#421
  • Fixes max player count being offset by -1 in Folia
  • Fixes a few technical Vanilla mechanics
  • Fixes a stupid amount of other issues
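The /save-all approach described above can be sketched as follows. This is a minimal illustration under assumed names (`Region`, `fullSaveRequested`, and `SaveAll` are illustrative stand-ins, not actual Canvas classes): the command thread only flags each ticking region, and each region performs its own full save on its next tick, so no region's data is ever touched from a foreign thread.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch: each ticking region carries a save flag that is
// set by /save-all and consumed on that region's own tick.
class Region {
    final AtomicBoolean fullSaveRequested = new AtomicBoolean(false);
    int fullSaves = 0; // stand-in for saving this region's chunks and players

    void tick() {
        // compareAndSet ensures the save happens at most once per request,
        // on the thread that owns this region.
        if (fullSaveRequested.compareAndSet(true, false)) {
            fullSaves++;
        }
    }
}

class SaveAll {
    // The command itself completes immediately; it only marks regions.
    static void markAll(List<Region> tickingRegions) {
        for (Region r : tickingRegions) {
            r.fullSaveRequested.set(true);
        }
    }
}
```

The key design point is that the command never blocks on, or reaches into, region-owned state; the save work is deferred to the thread that already owns it.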

While POI update scheduling works fine on Folia, it has a major flaw. In its region threading base patch, Folia replaces calls to BlockableEventLoop#execute with RegionizedTaskQueue#queueChunkTask. This works in general gameplay, but it causes every POI update to be scheduled for the next tick, which breaks Vanilla behavior and, in turn, things like portal creation.

The BlockableEventLoop#execute method is as such:

public void execute(Runnable task) {
    R runnable = this.wrapRunnable(task);
    if (this.scheduleExecutables()) {
        this.schedule(runnable);
    } else {
        this.doRunTask(runnable);
    }
}

Essentially, it checks whether it is on the “main thread” (which no longer exists in Folia, but for this case we can treat the region owning the BlockPos being updated as the “main thread”). If it is, it runs the task immediately; otherwise, it schedules the task for the next tick.

Folia breaks this logic by always scheduling for the next tick. In normal gameplay this is fine, but in some cases it causes issues. Canvas restores the Vanilla logic, fixing the problems this change in behavior introduced.
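The restored dispatch behavior can be sketched as below. All names here are illustrative stand-ins, not the actual Folia/Canvas identifiers; in the real code the ownership check is "does the current thread own the region at this BlockPos":

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch of the Vanilla-style dispatch Canvas restores:
// run the task immediately when the current thread owns the target
// region, otherwise defer to that region's next tick. Folia's
// replacement always took the defer branch.
class RegionTaskDispatcher {
    private final Queue<Runnable> nextTick = new ArrayDeque<>();
    private final boolean ownsRegion; // stand-in for the real ownership check

    RegionTaskDispatcher(boolean ownsRegion) {
        this.ownsRegion = ownsRegion;
    }

    void execute(Runnable task) {
        if (ownsRegion) {
            task.run();           // same-tick, Vanilla behavior
        } else {
            nextTick.add(task);   // defer to the owning region's next tick
        }
    }

    // Drain deferred tasks, as the owning region would at the start of its tick.
    void runNextTick() {
        Runnable task;
        while ((task = nextTick.poll()) != null) {
            task.run();
        }
    }
}
```

With this shape, a POI update issued from the owning region applies within the same tick, which is what portal creation relies on.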

In Vanilla, when a player leaves, the in-flight pearls they own are unloaded along with the player and saved to their player data; on join, the in-flight pearls are loaded back in. Folia removes this functionality entirely, since the system is not region-threading safe. Canvas previously attempted a fix, but it was deemed unsafe. Canvas now contains a new, rewritten system for storing these in-flight pearls.

The new system saves pearls ONLY on leave, into a server-wide concurrent map of UUID -> pearls, which is persisted to pearls.dat in the root folder of the server. When the player leaves, all pearls are added to the map on their owning region before being discarded with the reason UNLOADED_WITH_PLAYER. On the global tick, the server writes the map to disk for autosave (on the same interval as map autosave), with the /save-all command, and on shutdown; all of this goes through the utility I/O pool. On join, each pearl is decoded and spawned back in using Canvas utilities, and the list in the save data associated with the player's UUID is then cleared, since the spawned pearls no longer need to be stored.

This makes the system significantly safer for region threading: everything is now scheduled and saved on the correct context, in a more efficient and safer manner. The resulting pearls.dat structure looks like this:

"": {
    Data: {
        // player UUID
        a4804847-858e-3278-941e-ab3175b15a81: [
            {
                world: "minecraft:overworld"
                data: {
                    // Pearl entity data
                }
                uuid: b679858f-af33-48ea-9323-509219dc6189
            },
            {
                world: "minecraft:overworld"
                data: {
                    // Pearl entity data
                }
                uuid: de1c429f-3f7c-4d02-8c9c-495ff94c0367
            },
        ]
    }
}
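The server-wide map described above can be sketched like this. The class and record names are illustrative assumptions, not the actual Canvas types; the point is the leave/join lifecycle around a concurrent map:

```java
import java.util.List;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch of the UUID -> pearls store. Pearls are recorded
// only when the owner leaves, and drained once they are respawned on
// join so they are never duplicated.
class PearlStore {
    // Minimal stand-in for the serialized pearl entry (world + pearl data).
    record StoredPearl(String world, UUID uuid) {}

    private final ConcurrentMap<UUID, List<StoredPearl>> byOwner = new ConcurrentHashMap<>();

    // Called on the owning region when the player leaves, before the
    // pearls are discarded with UNLOADED_WITH_PLAYER.
    void onPlayerLeave(UUID owner, List<StoredPearl> inFlight) {
        byOwner.put(owner, List.copyOf(inFlight));
    }

    // Called on join: remove the entry so the save data no longer holds
    // pearls that have just been spawned back in.
    List<StoredPearl> drainOnJoin(UUID owner) {
        List<StoredPearl> pearls = byOwner.remove(owner);
        return pearls == null ? List.of() : pearls;
    }
}
```

Draining (remove, not get) on join is what guarantees the "clear the list after spawning" step: a second read for the same UUID yields nothing.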

The end credits were disabled by Folia due to Folia's rewrite of respawning logic. Folia's respawning logic contains the method:

private void respawn(java.util.function.Consumer<ServerPlayer> respawnComplete, org.bukkit.event.player.PlayerRespawnEvent.RespawnReason reason, boolean alive)

This method contains all the logic for respawning a player under region threading. Canvas essentially splits it into the method itself and a “finalizer”: a Runnable that adds the player back to the world.

In Vanilla, when showing the end credits, the server removes the player from the world and waits for a ServerboundClientCommandPacket with the action PERFORM_RESPAWN. If the player is in the end credits when this packet is received, they have told the server they exited the credits and are waiting to be added back to the world. Canvas splits the method into its finalizer so it can replicate this process: the Runnable is stored in a new ServerPlayer field, canvas$exitEndCreditsCallback, and is called when the packet is received.

When the player enters the end portal, Canvas checks whether they have already seen the credits. If they have, it immediately runs the respawn, skipping the stored finalizer entirely. If they haven't, it stores the finalizer after removing the player from the world, then sends a ClientboundGameEventPacket with the game event WIN_GAME to tell the client to display the end credits.

When the packet is received, it is redirected to the global region thread: if it were queued as a task like normal, it would never run, because the player isn't owned by any region and as a result isn't being ticked. Once the finalizer runs, the player is scheduled to respawn back at their respawn location.
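The split-respawn flow above can be sketched as follows. The class and method names are illustrative (only `canvas$exitEndCreditsCallback` is a field name from the source; here it is mirrored by a plain field), and the world-removal and packet-sending steps are reduced to comments:

```java
// Illustrative sketch of the end-credits respawn split: the finalizer
// that re-adds the player is either run immediately (credits already
// seen) or stored until PERFORM_RESPAWN arrives.
class CreditsFlow {
    private Runnable exitEndCreditsCallback; // mirrors canvas$exitEndCreditsCallback
    private boolean seenCredits;

    void enterEndPortal(Runnable respawnFinalizer) {
        if (seenCredits) {
            // Already saw the credits: respawn immediately, nothing stored.
            respawnFinalizer.run();
        } else {
            // Remove the player from the world (elided), store the
            // finalizer, and send WIN_GAME to display the credits (elided).
            exitEndCreditsCallback = respawnFinalizer;
        }
    }

    // Handled on the global region thread, since the player is not
    // owned by any region while watching the credits.
    void onPerformRespawnPacket() {
        if (exitEndCreditsCallback != null) {
            seenCredits = true;
            exitEndCreditsCallback.run(); // re-adds the player to the world
            exitEndCreditsCallback = null;
        }
    }
}
```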