I would interpret this as saying that inode initialization is a task that can impose latency and degraded throughput.
The goal of the code would be to arrange for it to run during a relatively idle period. Initializing the inode tables in advance would avoid a latency hit ("lag") when you actually need the inode tables.
I think the suggestion is that it's better to have a quick install process, and then slightly degraded throughput for a while. While the install process is running, it's likely blocking you from doing useful things with the computer at the same time, for example:
- checking your email
- reading the documentation from the packages installed on your system
- finding your favourite desktop theme
- configuring your professional workspace
- booting back into Windows, where you have all your stuff
> The ext4 mkfs option lazy_itable_init, which is now activated automatically when kernel support is detected, speeds up formatting ext4 filesystems during install. When the fs is mounted, the kernel begins zeroing the inode tables in the background. During install, this is a somewhat wasted effort and interferes with the copy process. Mounting the filesystem with the noinit_itable mount option disables the background initialization. This should help the install go a bit faster, and after rebooting when the fs is mounted without the flag, the background initialization will be completed.
https://bugs.launchpad.net/ubuntu/+source/partman-ext3/+bug/733652
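Putting those pieces together, the sequence looks roughly like the sketch below. The device name /dev/sdXN and the /target mount point are placeholders, and the exact options the installer passes may differ; on recent e2fsprogs the lazy options are the default anyway.

```
# Format quickly: defer zeroing of the inode tables (and the journal) instead
# of doing it during mkfs. Recent e2fsprogs enables this by default when the
# running kernel supports it.
mkfs.ext4 -E lazy_itable_init=1,lazy_journal_init=1 /dev/sdXN

# During the install, disable the background zeroing so the ext4lazyinit
# kernel thread doesn't compete with the package copy for I/O.
mount -o noinit_itable /dev/sdXN /target

# ... install the system into /target, unmount, reboot ...

# On the installed system the filesystem is mounted without noinit_itable,
# so the kernel resumes zeroing the remaining inode tables in the background.
```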
The bug report also points to a thread consisting mainly of rants by Ted Ts'o. The main point seems to be that inode checksums hadn't been implemented yet, which meant that a filesystem with un-zeroed inode tables would be significantly less robust against errors. Fortunately, inode checksums were implemented within a year or so of that comment.
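If you want to see how this looks on a running system, a couple of read-only checks will do (again with /dev/sdXN as a placeholder, and assuming the checksumming in question surfaced as the metadata_csum feature):

```
# List the enabled filesystem features; uninit_bg allows uninitialized block
# groups, and metadata_csum adds checksums over metadata, including inodes.
sudo dumpe2fs -h /dev/sdXN | grep -i 'features'

# The background zeroing runs as a kernel thread named ext4lazyinit; once it
# no longer appears, the inode tables have been fully initialized.
ps -e | grep ext4lazyinit
```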