There are two things to think about here, as hinted at by other answers.
The first is File System Corruption. This relates to the metadata structures that make the file system usable, and it is understood and controlled by the kernel.
The second is the content of the files. When the content of a file is corrupted, the kernel will not know (or care). Complex systems like databases implement their own metadata facilities to take care of this problem, but for most file types on a typical desktop system there is no such protection.
If you are editing a file, a single "change" may consist of writes to several locations. Once all of these writes have completed, the file is in a consistent state; if only some of them complete, the file contents may be corrupted (inconsistent), as the sketch below illustrates.
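To make that concrete, here is a hypothetical example in C. The file layout (a record count in the header, followed by fixed-size 64-byte records) is invented for this sketch; the point is that one logical change needs two separate writes:

```c
#include <fcntl.h>
#include <unistd.h>

#define RECORD_SIZE 64
#define HEADER_SIZE sizeof(unsigned)

/* Append a record to a hypothetical file format: a record count
 * in the header, then fixed-size records. One logical change,
 * two writes. (Error handling omitted for brevity.) */
void append_record(int fd, const char *rec, unsigned count)
{
    /* Write 1: the new record at the end of the data region. */
    pwrite(fd, rec, RECORD_SIZE, HEADER_SIZE + (off_t)count * RECORD_SIZE);

    /* Write 2: the updated record count in the header. If the
     * machine stops between these two writes, or before either
     * reaches the disk, readers see a count that does not match
     * the data: the file is internally inconsistent even though
     * the file system itself is perfectly healthy. */
    unsigned new_count = count + 1;
    pwrite(fd, &new_count, sizeof new_count, 0);
}
```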
The operating system will (should) group related writes into a transaction. For example, when a file grows, the data must be written to the blocks belonging to that file, the file system structures must be updated to allocate those new blocks to the file, and the directory entry (e.g. the last-modification time) must be updated, all as a single group. Once all of this is synced (flushed) to disk, the file system will be consistent again, but the file contents may not be until all the relevant writes have been submitted by the application and flushed to disk by the operating system.
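An application that needs that flush to happen at a particular moment has to ask for it explicitly; write() alone only hands the data to the kernel. A minimal POSIX sketch:

```c
#include <fcntl.h>
#include <unistd.h>

/* Write a buffer to `path` and force it onto stable storage.
 * Returns 0 on success, -1 on failure. */
int save_buffer(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    /* write() only queues the data in the kernel's write queues. */
    if (write(fd, buf, len) != (ssize_t)len) {
        close(fd);
        return -1;
    }

    /* fsync() blocks until the kernel reports the file's data (and
     * metadata) as durably written to the storage device. */
    if (fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```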
If the application is in the middle of a complicated change and is interrupted before it has issued all of its writes (for example, you press Save and immediately close your laptop lid), the group of changes may not all make it to the kernel's write queues.
Generally, whatever is already in the write queues will be flushed to disk, and the file system should end up consistent. With journaling file systems, some of those changes may temporarily sit in the journal rather than in their final location, but they are still safely on disk.
File contents, however, are another story.
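Applications that do care about their contents typically protect them on their own, most commonly with the classic write-new-then-rename pattern: build the complete new contents in a temporary file, flush it, then atomically swap it into place. A sketch of that pattern (the `.tmp` suffix is an arbitrary choice):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Atomically replace `path` with `len` bytes from `buf`.
 * rename() is atomic on POSIX file systems, so readers see either
 * the complete old file or the complete new one, never a mix. */
int atomic_save(const char *path, const void *buf, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    /* Write and flush the full new contents before the swap. */
    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);

    /* The swap itself is a single metadata operation, which the
     * file system (and its journal) keeps consistent. */
    return rename(tmp, path);
}
```

This is exactly the kind of content-level transaction the kernel will not do for you, which is why databases and careful editors implement it themselves.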