Recently I tackled ECS serialization, which serializes the Chunk struct in a very straightforward way. Along the way I learned a lot about the struct, which I would like to summarize here.

Everything here is currently based on preview 21. Selectable chunk size is coming, and that would change some of the numbers below.

The Chunk struct

Notice the X | Y comments. That's the offset to the end of each field depending on architecture (64-bit | 32-bit).


Each contains 2 pointer-type variables, next and previous, so this is where the difference in offset comes from. They let each chunk be a member of the whole network of chunks: we can quickly jump between existing chunks, and also find chunks with empty slots next to this one.

  • On serialization: it is nulled out. On deserialization we relink to the neighbours again.
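The null-then-relink idea can be sketched as a toy model. This is not Unity's actual code; chunks are modelled as plain nodes, and only their order survives in the "file":

```python
# Toy sketch (not Unity's actual code) of the null-then-relink idea.
class ChunkNode:
    def __init__(self, label):
        self.label = label
        self.next = None      # nulled out in the saved file
        self.previous = None

def relink(chunks):
    # On deserialization, walk the chunks in order and restore neighbours.
    for a, b in zip(chunks, chunks[1:]):
        a.next, b.previous = b, a
    return chunks

chunks = relink([ChunkNode(l) for l in ["A", "B", "C"]])
print(chunks[1].previous.label, chunks[1].next.label)  # prints "A C"
```

The pointers themselves are meaningless across process runs, which is why only the ordering needs to be stored.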


The EntityArchetype variable, as we know, is just a wrapper over this Archetype pointer type. The archetype contains a rich amount of data, but the actual data is not here in the chunk! That's why it is a pointer. The actual data is kept by the ArchetypeManager, which is inside EntityManager, of which there is one per World.

  • On serialization: this field is replaced with an integer "archetype index". On deserialization the number is mapped back to an equivalent archetype (existing, or created on the spot) by asking the ArchetypeManager again.
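The archetype-index round trip can be sketched like this. The names and data shapes are mine, not Unity's; an archetype is modelled as a tuple of component type names, and a plain dict stands in for the ArchetypeManager:

```python
# Hypothetical sketch of the "archetype index" round trip.
def save(chunk_archetypes):
    archetype_table, index_of, saved = [], {}, []
    for archetype in chunk_archetypes:
        if archetype not in index_of:
            index_of[archetype] = len(archetype_table)
            archetype_table.append(archetype)
        saved.append(index_of[archetype])   # pointer replaced by an int index
    return archetype_table, saved

def load(archetype_table, saved, manager):
    # `manager` stands in for ArchetypeManager: return an existing
    # equivalent archetype, or create one on the spot.
    return [manager.setdefault(archetype_table[i], archetype_table[i])
            for i in saved]

manager = {}  # pretend this is the target World's ArchetypeManager
table, saved = save([("Position",), ("Position", "Velocity"), ("Position",)])
chunks = load(table, saved, manager)
print(saved)  # prints "[0, 1, 0]"
```

Two chunks with the same archetype share one table entry, which is exactly why an index is enough to reconstruct the pointer on the other side.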


This pointer points to the chunk's tail. As you know, associated SCDs are a per-chunk thing. Look at the GetSharedComponentOffset method, also in the picture. You can see it offsets back from the end by the number of SCDs attached. We can deduce that the real data is right here at the end of the chunk.

  • On deserialization: it just finds its own tail again and restores the pointer in the new memory.
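The walk-back-from-the-tail arithmetic looks roughly like this. This is a sketch of the idea, not the verbatim method; I am assuming the shared component indexes are 4-byte ints:

```python
K_CHUNK_SIZE = 16 * 1024 - 256   # 16128, the fixed chunk size

def shared_component_offset(num_shared_components):
    # Sketch of the GetSharedComponentOffset idea: the shared component
    # indexes (assumed 4-byte ints) sit at the very end of the chunk,
    # so the offset walks back from the tail by the number of SCDs.
    return K_CHUNK_SIZE - num_shared_components * 4

print(shared_component_offset(0), shared_component_offset(2))  # prints "16128 16120"
```

Since the offset depends only on the chunk size and the SCD count, the tail pointer is trivially recomputable after deserialization, which is why it does not need to be saved.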

Count / Capacity

Count is the number of entities currently in the chunk. It is very important, as it determines whether the next entity can go in here or must find a new chunk to live in.

Capacity is the precalculated maximum number of entities that can fit in here. It is 100% precalculable because the chunk's size is fixed, and we also know each component's size thanks to the struct-only high-performance C# restriction. From there it is just some multiplication and division.

  • On serialization: these numbers are preserved.
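The "just multiplication and division" claim can be made concrete. The sizes below are made-up examples, and I am assuming 8 bytes for the Entity struct itself:

```python
# Sketch of why Capacity is precalculable; sizes are made-up examples.
def capacity(buffer_size, component_sizes):
    # 8 bytes assumed for the Entity struct plus each component's known size.
    bytes_per_entity = 8 + sum(component_sizes)
    return buffer_size // bytes_per_entity

# e.g. a 16000-byte buffer with components of 12 and 16 bytes:
print(capacity(16000, [12, 16]))  # prints "444"
```

Because every term on the right-hand side is a compile-time-knowable constant, Capacity never needs to be discovered at runtime.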


A key into a special storage which can store any object per entity in this chunk. See here:

Padding0 / Padding2

This is a data-size trick to make the struct align nicely. Note that Padding2 uses the pointer size, which is 4 bytes on 32-bit and 8 bytes on 64-bit.


The chunk can store a change version per component. It is like SharedComponentDataValueArray in that it points to this chunk's tail, but further back, before the shared indexes.

  • On deserialization: it just finds its own tail again and restores the pointer in the new memory.


This fixed buffer of 4 bytes serves as an anchor point for "anything goes after here".

That is, we malloc kChunkSize (16 * 1024 - 256) first, then treat the beginning of that memory as a Chunk*. Then if we ask for chunk->Buffer, we get the free memory where we can do anything we want, after all the chunk header fields. Just make sure not to go over the end boundary.

The position of Buffer is different on 32-bit and 64-bit machines because the preceding pointers are different sizes. Does that mean a 32-bit machine gets more chunk space to work with?
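A back-of-the-envelope check suggests yes, a little. The field counts below are made-up placeholders, since only the pointer-sized header fields differ between architectures:

```python
# Hypothetical header layout: some pointer-sized fields, some 4-byte fields.
K_CHUNK_SIZE = 16 * 1024 - 256

def buffer_space(pointer_size, pointer_fields=4, int_fields=5):
    # Only the pointer-sized fields change size between 32- and 64-bit.
    header = pointer_fields * pointer_size + int_fields * 4
    return K_CHUNK_SIZE - header

print(buffer_space(4) - buffer_space(8))  # prints "16"
```

With 4 hypothetical pointer fields, 32-bit saves 4 bytes per pointer, i.e. 16 extra bytes of buffer. A real count of the pointer fields in Chunk would give the exact figure.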

Right now we can visualize the chunk like this:

GetChunkBufferSize questions

This method takes the fixed chunk size 16 * 1024 - 256 = 16128, subtracts the header (makes sense), and then 4 more bytes. I tried sizeof(Chunk) and it seems to be 88 and not 84 as expected. So is that what the extra 4 is for? I don't know why.

But that is still not the buffer size returned. The method then adds an amount proportional to how many SCDs you have (for the shared indexes) and how many components you have (for the change versions).
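My reading of that arithmetic, written out as a sketch (this is a reconstruction from the description above, not the verbatim Unity source; I am assuming 4 bytes per shared index and per change version):

```python
# Reconstruction of the described arithmetic; constants match the article.
K_CHUNK_SIZE = 16 * 1024 - 256   # 16128
SIZEOF_CHUNK = 88                # what sizeof(Chunk) measured above

def get_chunk_buffer_size(component_count, shared_component_count):
    base = K_CHUNK_SIZE - SIZEOF_CHUNK - 4       # "minus the header, then 4 more"
    return (base
            + component_count * 4                # change versions
            + shared_component_count * 4)        # shared component indexes

# With enough components the result exceeds kChunkSize itself:
print(get_chunk_buffer_size(2, 1), get_chunk_buffer_size(100, 0))
```

Note that with 100 components the second value is already above 16128, which is exactly the puzzle discussed next.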

Because a chunk could hold many components and SCDs (infinitely many?), the value returned by this method could go well over 16128. What does that mean? First, the "chunk buffer size" as defined by Unity is not bounded by kChunkSize, that's for sure.

From searching the source, the result of this method is used to calculate Capacity, by simply dividing it by the space each entity takes. But what about the case where we have a million components and the Capacity ends up very large?

In that case, the space taken per entity also increases with each component added. When both dividend and divisor grow together, Capacity likely stays at a low number anyway.

And so even when this method returns a number well over the chunk's malloc'd memory, actual writes cannot go over it, because the capacity caps how many entities get written.

Even if you manage to make the capacity large through some combination, that's where kMaximumEntityPerChunk comes into play: if the capacity exceeds 16128 / 8 = 2016, it is an error!
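The safety net described above can be sketched as a clamp on the division (a toy model of the idea, not Unity's actual check):

```python
# Sketch of the capacity backstop: buffer size and per-entity size grow
# together as components are added, and kMaximumEntityPerChunk catches
# any combination that would still yield an oversized capacity.
K_MAXIMUM_ENTITY_PER_CHUNK = 16128 // 8   # 2016

def compute_capacity(buffer_size, bytes_per_entity):
    cap = buffer_size // bytes_per_entity
    if cap > K_MAXIMUM_ENTITY_PER_CHUNK:
        raise ValueError("capacity exceeds kMaximumEntityPerChunk")
    return cap

print(compute_capacity(16036, 8))   # prints "2004" - fine, under 2016
# compute_capacity(16036, 4) would raise: 4009 > 2016
```

So the real invariant is on the entity count, not on the buffer-size figure, which explains why a buffer size above 16128 is harmless.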

I'm not sure about tag components; in theory they wouldn't need any space for the "change version", since there is nothing to change, and so wouldn't expand the chunk space.