Great! What is Stable Diffusion 3 doing when it swaps a whole tensor layer at the moment the T5xxl LLM is loaded in ComfyUI? That has been driving me nuts for months. There's no LoRA or custom alignment model involved, yet something funky is clearly happening in the model loader's code and behavior. The layer swap seems to be part of alignment, but I'm dumbfounded when it comes to breaking it down into something I can actually figure out.
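
To make the question concrete, here is a minimal, hypothetical sketch of the pattern I think I'm seeing: a checkpoint gets loaded, but one layer's weight tensor is overwritten with a different tensor on the way in. None of these names come from ComfyUI or the SD3 code; this is just my guess at the general shape of the operation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch -- NOT ComfyUI's actual loader code, just the generic
# "swap one layer's tensor at load time" pattern I suspect is happening.

class TinyTextEncoder(nn.Module):
    """Stand-in for a text encoder; final_norm plays the role of the swapped layer."""
    def __init__(self, hidden: int = 8):
        super().__init__()
        self.embed = nn.Embedding(100, hidden)
        self.final_norm = nn.LayerNorm(hidden)

def load_with_layer_swap(model: nn.Module, checkpoint: dict,
                         swap_key: str, new_tensor: torch.Tensor):
    """Load a state dict, but replace one entry with a different tensor first."""
    sd = dict(checkpoint)        # shallow copy so the original checkpoint is untouched
    sd[swap_key] = new_tensor    # overwrite just that one layer's weights
    return model.load_state_dict(sd, strict=False)

model = TinyTextEncoder()
ckpt = {k: v.clone() for k, v in model.state_dict().items()}

# Pretend this tensor is whatever "alignment" data replaces the original layer.
replacement = torch.ones_like(ckpt["final_norm.weight"])

load_with_layer_swap(model, ckpt, "final_norm.weight", replacement)
print(model.final_norm.weight)  # now all ones -- the swapped-in tensor
```

If the real loader is doing something like this, what I still can't work out is where the replacement tensor comes from and why it's needed when there's no LoRA or separate alignment model on disk.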