Frequently asked questions (FAQs) about VMware on FlashArray - Part 1: vDisk types - Basics
I don't know how many times I've had the discussion about the ideal virtual disk (vDisk) type for a virtual machine, but it comes up on a regular basis. Reason enough to dedicate a blog post to the topic.
Basically - limiting ourselves to VMware vSphere - there are three virtual disk formats:
thin
(lazy) zeroed thick (LZT)
eager zeroed thick (EZT)
Thin - thin provisioned vDisks occupy only the capacity that is actually used by the guest operating system. After the vDisk is created, only a single VMFS block is consumed. As the guest OS writes new data, new blocks are allocated on the VMFS file system and "zeroed" before the data is written to storage. The vDisk therefore grows dynamically, and every write to a new block incurs additional latency.
Latency on first write to a block due to: allocation + "zeroing" of the block
Zeroed thick - with LZT vDisks, the full provisioned capacity is allocated on the VMFS file system at creation time, but not zeroed. When the guest operating system writes to a block for the first time, the block is "zeroed" and then the data is written. There is also additional latency here, but less than with thin vDisks.
Latency on first write to a block due to: "zeroing" of the block
Eager zeroed thick - with EZT vDisks, the full provisioned capacity is allocated and "zeroed" when the disk is created. The vDisk can only be used once zeroing is complete. With EZT there is no first-write latency, because allocation and zeroing have already been performed in advance.
Latency during write operation: none
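For anyone who automates against the vSphere API: the three formats boil down to two flags on a vDisk's backing object, thinProvisioned and eagerlyScrub. Below is a minimal sketch using the pyVmomi SDK (the vCenter address, credentials and VM name are placeholders, not values from this article) that lists every virtual disk of a VM and derives its provisioning type from these flags:

```python
# Sketch: derive the provisioning type of each vDisk from its backing flags.
# Assumes pyVmomi is installed; vCenter address, credentials and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "my-test-vm")

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        backing = dev.backing  # VirtualDiskFlatVer2BackingInfo for VMFS-backed disks
        if getattr(backing, "thinProvisioned", False):
            kind = "thin"
        elif getattr(backing, "eagerlyScrub", False):
            kind = "eager zeroed thick (EZT)"
        else:
            kind = "lazy zeroed thick (LZT)"
        print(f"{dev.deviceInfo.label}: {kind}, {dev.capacityInKB // (1024 * 1024)} GiB")

Disconnect(si)
```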
The performance difference between the individual vDisk formats used to be clearly noticeable, since an unallocated block first had to be zeroed before the actual data could be written - essentially two writes for every new block.
To address this, VMware introduced support for the SCSI command WRITE SAME (one of the VAAI primitives): it tells the storage target to write a pattern of zeros itself. Instead of sending zeros across the SAN, the ESXi host uses WRITE SAME to instruct the array to zero a specific region of the storage device.
Pure Storage FlashArray optimizes WRITE SAME even further: since the FlashArray does not physically store identical patterns (contiguous zeros), these zeros are simply discarded and the time needed for pre-zeroing is reduced.
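Whether this offload is actually available can be checked per host, for example with esxcli or programmatically. A small sketch, continuing the pyVmomi session from the sketch above (the ESXi host name is a placeholder), reads the vStorage/VAAI hardware-acceleration status of every VMFS volume mounted on a host:

```python
# Sketch: report the VAAI / vStorage hardware-acceleration status of every VMFS
# volume mounted on an ESXi host. Reuses `content` and `vim` from the previous
# sketch's SmartConnect session; the host name is a placeholder.
host_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in host_view.view if h.name == "esxi01.example.com")

for mount in host.config.fileSystemVolume.mountInfo:
    vol = mount.volume
    if isinstance(vol, vim.host.VmfsVolume):
        print(f"{vol.name}: vStorageSupport = {mount.vStorageSupport}")
```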

Conclusion
In general, Pure Storage recommends:
1. Use thin provisioned vDisks, as they offer the greatest flexibility; the performance difference is only noticeable for applications with very high I/O load.
2. For such high-I/O applications that need maximum performance (see 1.), use eager zeroed thick vDisks (a short pyVmomi sketch of both options follows after this list).
3. Do not use thin provisioned vDisks for templates, but zeroed thick vDisks (Why? More on that in the second part of the "VMware on FlashArray" series).
Pure Storage does not recommend lazy zeroed thick vDisks: they offer little advantage over thin provisioned vDisks while reserving ("stranding") space on the datastore.
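As a rough illustration of recommendations 1 and 2, the following pyVmomi sketch (reusing the vm object and session from the first sketch; the size, defaults and helper name are examples only) adds a new disk to a VM either thin provisioned or eager zeroed thick:

```python
# Sketch: add a new thin provisioned vDisk to an existing VM (recommendation 1).
# Set thin=False and eager_zero=True for an EZT disk instead (recommendation 2).
# Reuses `vm` and `vim` from the first sketch; size and names are examples only.
def add_disk(vm, size_gb, thin=True, eager_zero=False):
    spec = vim.vm.ConfigSpec()
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))
    unit = sum(isinstance(d, vim.vm.device.VirtualDisk)
               for d in vm.config.hardware.device) + 1  # naive free-unit guess (unit 7 is reserved)

    disk_spec = vim.vm.device.VirtualDeviceSpec()
    disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create

    disk = vim.vm.device.VirtualDisk()
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.controllerKey = controller.key
    disk.unitNumber = unit

    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = thin
    backing.eagerlyScrub = eager_zero
    disk.backing = backing

    disk_spec.device = disk
    spec.deviceChange = [disk_spec]
    return vm.ReconfigVM_Task(spec=spec)

task = add_disk(vm, size_gb=20, thin=True)               # thin provisioned (default recommendation)
# task = add_disk(vm, 20, thin=False, eager_zero=True)   # eager zeroed thick for high-I/O workloads
```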

From the storage point of view of the FlashArray, it does not matter whether a vDisk is thin or thick: thanks to the integrated data reduction (compression, deduplication), identical patterns (zeros) are only ever stored once.
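This effect can be verified on the array itself: the array-wide data reduction can be read via the Purity REST API, for example with the purestorage Python client. A minimal sketch, assuming the REST 1.x client and a placeholder array address and API token:

```python
# Sketch: read the array-wide data reduction from a FlashArray via the Purity
# REST 1.x API. Requires `pip install purestorage`; address and token are placeholders.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
space = array.get(space=True)        # array-wide capacity and reduction metrics
info = space[0] if isinstance(space, list) else space
print(f"Data reduction: {info['data_reduction']:.1f}:1, "
      f"thin provisioning savings: {info['thin_provisioning']:.2%}")
array.invalidate_cookie()            # end the REST session
```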
In the next article of the "VMware on FlashArray" series we will go deeper into the topic of vDisks and look at how the vDisk types affect functions such as Space Reclamation and XCOPY.
More info - Links
All officially documented configuration options, in the GUI as well as the CLI, can be looked up in the "on-board" user guides of the Pure Storage systems.
Click on "Help" in the Purity main menu.
The User Guide is structured like the main menu and its sections can be expanded. A keyword search is also integrated.
WEB: Pure Storage (Pure1) support portal - ticket system and support (requires registered FlashArray systems)
PHONE: Pure Storage phone support: GER - (+49) (0)800 7239467; INTERNATIONAL - (+1) 650 7294088
WEB: Pure Storage OFFICIAL blog