Monthly Archives: January 2012

Windows 8 VHDX file instant dedupe wish list

I have been testing the Windows 8 dedupe feature, especially with large VHDX files. The testing has revealed a major “wish”. Hopefully somebody from the right department at Microsoft reads this and at least puts it on a feature list for the future.

Here is a scenario I exercised – and it seems to be a very common one:

  1. Create a Windows Server VM inside a 40GB VHDX file – call it VM1.vhdx
  2. Xcopy (and yes, xcopy /J – see my previous blog “Tips for copying large VHD and VHDX files”) the VM1.vhdx file to, say, VM2.vhdx. That’s 40 GB of reads and 40 GB of writes.
  3. Repeat the xcopy to a different destination file – xcopy /J VM1.vhdx to VM3.vhdx – and that’s 40 GB more reads and 40 GB more writes.
  4. Fire up VM1, enter license info, assign a computer name, assign an IP address, etc., and turn it into a file server
  5. Fire up VM2, enter license info, etc., install Microsoft Exchange, and turn it into an Exchange Server
  6. Fire up VM3, enter license info, etc., install SQL Server, and turn it into a SQL Server
  7. Now let the system idle, make sure it does not hibernate, and wait for dedupe to read all 3 VHDX files (3 x 40 GB worth of reads, etc.) and dedupe them.

Instead, here is an alternative sequence that would be really useful:

  1. Create a Windows Server VM inside a 40GB VHDX file
  2. Run a PS script that creates an instantly deduped second copy of this VHDX file – with all the associated dedupe metadata. Now I have 2 identical VHDX files that have already been deduped. The PS script would have to invoke some custom dedupe code Microsoft could ship: create a new file entry for, say, VM2.vhdx, and create the dedupe metadata for both VM1.vhdx and VM2.vhdx.
  3. Repeat the same PS script with different parameters, and now I have 3 identical VHDX files, all deduped
  4. Repeat steps 4 through 6 from the first sequence – step 7, the dedupe pass, is not needed

This would save hundreds of GBs of reads and writes, as well as administrator time, increasing productivity. Whether you call this instant dedupe or not is up to you.

In the interest of keeping the focus on the instant dedupe scenario, I have deliberately avoided the details of requiring Sysprep’ed installations. But the audience I am targeting with this blog will certainly understand the nuances of requiring Sysprep.

If you are a Microsoft MVP reading this blog, and you agree, please comment on the blog, and email your MVP lead asking for this feature.

Intel Ultrabooks, SSD based laptops, and file system needs

This blog is partly triggered by the new Resilient File System (ReFS) that Microsoft just announced for Windows 8. At least for now, the new file system appears to be aimed more at servers than at laptops or tablets – particularly SSD based laptops and tablets. More about ReFS in other blogs.

For the record, I believe Intel holds a trademark on the term Ultrabook.

I am not sure my Windows 7 and Windows 8 Developer Preview NTFS based laptops need a better metadata checksum mechanism, let alone a better user data checksum mechanism. But here is what I do believe my NTFS based laptops (Win 7, and presumably also Win 8 based laptops) need, especially when the disk is an SSD:

  • Can the OEMs please stop bundling and/or offering a disk defragmentation utility with SSD based systems? SSD based volumes do not need to be defragmented; indeed, defragmenting them reduces the life of the SSD! Further, maybe the Microsoft OEM division, especially for Windows 8, as well as the Intel Ultrabook division, can do something about this?
  • Microsoft, thank you for disabling the built-in defrag code in Windows 7 when an SSD based NTFS volume is detected. Hopefully the same is true in Windows 8 as well.
  • SSDs have a need for unused data blocks to be erased ahead of time – that is just the nature of the physics involved, and doing so makes subsequent writes faster. After a user has had an SSD based laptop for a while, every block has been written at least once, so it would be advantageous for the drive firmware to know which blocks the file system (NTFS) considers unused, so that it can go ahead and erase them. The idea is that erased blocks are ready for me to write to when I download the latest movie, whether from iTunes, YouTube, or my DVD drive. Enter the Windows 7 “TRIM” command, where Windows 7 passes down to the drive which disk blocks it has just “released” and which can therefore be erased. The problem is that it is not clear which drive vendors actually make use of the TRIM command – and when a new version of the drive firmware does make use of it, whether the laptop OEMs bother to ship that firmware. I understand there are profits to be made, and that goal may at least temporarily result in a situation where a Windows 7 SSD volume’s TRIM commands are simply ignored. It would be interesting to get those statistics – whether from Microsoft, a drive vendor, a laptop OEM, or for that matter Intel for its Ultrabook branded OEMs. (A way to check what a given drive reports is sketched just after this list.)
  • What would be even more useful would be to have the same insight for Windows 8 SSD based laptops and tablets, whenever those are commercially available. While I will not buy a laptop or tablet simply because it makes proper use of the TRIM command, I do know that 64 GB and 128 GB SSDs tend to fill up very quickly, and hence TRIM will help. So it is certainly an important consideration, and perhaps one way an OEM can differentiate its offering.
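
For what it is worth, part of the answer can be obtained on a given machine today. On the OS side, running fsutil behavior query DisableDeleteNotify from an elevated command prompt shows whether Windows 7 is issuing delete notifications at all (a value of 0 means TRIM notifications are enabled). On the device side, the small program below is a minimal sketch – not production code, and the property and structure names should be checked against your Windows 7 SDK headers – that asks a physical drive whether it even advertises TRIM support, via IOCTL_STORAGE_QUERY_PROPERTY. Of course, a drive that advertises TRIM support may still do nothing useful with the commands it receives; this only answers the first half of the question.

// trimcheck.c - ask PhysicalDrive0 whether it reports TRIM support (Windows 7+).
// Minimal sketch; error handling kept to a minimum.
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    HANDLE hDisk = CreateFileW(L"\\\\.\\PhysicalDrive0",
                               0,                                  // query access only
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hDisk == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed, error %lu\n", GetLastError());
        return 1;
    }

    STORAGE_PROPERTY_QUERY query = {0};
    query.PropertyId = StorageDeviceTrimProperty;   // defined in the Win7 SDK
    query.QueryType  = PropertyStandardQuery;

    DEVICE_TRIM_DESCRIPTOR trim = {0};
    DWORD bytes = 0;
    if (DeviceIoControl(hDisk, IOCTL_STORAGE_QUERY_PROPERTY,
                        &query, sizeof(query),
                        &trim, sizeof(trim), &bytes, NULL)) {
        printf("Device reports TRIM support: %s\n",
               trim.TrimEnabled ? "yes" : "no");
    } else {
        printf("Query failed (older OS or device?), error %lu\n", GetLastError());
    }

    CloseHandle(hDisk);
    return 0;
}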

More about ReFS and SSDs in a new blog at a later date.

Tips for copying large VHD and VHDX files

I have been copying VHD files for a while and have been partly putting up with some issues, but I finally devoted the time to look at them a little more closely.

I have 2 different systems running Windows Server 2008 R2, one with 4GB of physical RAM and one with 16GB of physical RAM. Obviously these are developer systems and certainly not production systems. The problem happens when I copy large VHD files, large being defined as anything significantly larger than the amount of physical RAM on the system that is doing the copying. So for these 2 systems, say anything larger than 20GB in size.

I used copy or xcopy with the default options to copy the large VHD file from one local volume to another. Investigation showed that physical RAM usage grew to 100% and stayed there while the copy was happening. I was careful not to run any other tasks on either system. It also appeared that the physical RAM was all being consumed by the Cache Manager.

Once physical RAM usage hit 100% (as observed via PerfMon), I tried starting Notepad. In a highly unscientific study without enough data points, I found that at least half the time the system “hiccupped” noticeably before Notepad ran – it took a while even to type “Start Run notepad”. There certainly were times, especially early in the copy process, when the systems were highly responsive even with physical RAM usage at 100%.

My speculation – and I have not done any investigation to verify it – is that the physical RAM is being consumed by the Cache Manager for 2 different purposes. One is read-ahead of the source VHD file, and the second is caching the data written to the destination VHD file. The Cache Manager is more willing – and able – to give up memory that holds read-ahead data. It has to work harder – and consume system resources – when it needs to free up RAM that holds data written to the destination VHD file, since that data is dirty and must first be flushed to disk.

Looking around, I noticed that copying large files on Windows seems to be a well known problem. The Microsoft performance team has written a blog, “Slow large file copy issues”. They conclude that the solution is to copy the large file non-cached. While they suggest using Exchange EseUtil, I am not sure how many of my readers have access to that utility. I will also point out that I don’t understand the legal issues in taking a utility that ships with Microsoft Exchange and copying it to a non Exchange system!

The simpler solution, again as the Microsoft Performance Team advocates, is using the Windows 7 or Windows Server 2008 R2 xcopy utility and making sure to specify the /J option indicating that the file should be opened and copied in a non-cached manner.

My same unscientific testing shows that xcopy /J works well in copying large VHD files. Someday, I will trace this to figure out whether xcopy /J performs non-cached I/O on both the source and destination VHD files, or on just one of them.
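
For readers curious what “non-cached” looks like at the Win32 level, here is a minimal sketch of a copy loop that opens both the source and the destination with FILE_FLAG_NO_BUFFERING (plus FILE_FLAG_SEQUENTIAL_SCAN), keeping the Cache Manager out of the picture on both sides. To be clear, this is only an illustration of the technique – whether xcopy /J does this for one handle or both is exactly the open question above – and most error and tail handling is omitted.

// noncached_copy.c - sketch of a non-cached file copy (both handles unbuffered).
#include <windows.h>
#include <stdio.h>

#define CHUNK (1024 * 1024)   // 1 MB, a multiple of any common sector size

int wmain(int argc, wchar_t **argv)
{
    if (argc != 3) { wprintf(L"usage: noncached_copy <src> <dst>\n"); return 1; }

    HANDLE hSrc = CreateFileW(argv[1], GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING,
                              FILE_FLAG_NO_BUFFERING | FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    HANDLE hDst = CreateFileW(argv[2], GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS,
                              FILE_FLAG_NO_BUFFERING | FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    if (hSrc == INVALID_HANDLE_VALUE || hDst == INVALID_HANDLE_VALUE) {
        wprintf(L"open failed, error %lu\n", GetLastError());
        return 1;
    }

    // VirtualAlloc returns page-aligned memory, which satisfies the
    // sector-alignment requirement of FILE_FLAG_NO_BUFFERING.
    BYTE *buf = (BYTE *)VirtualAlloc(NULL, CHUNK, MEM_COMMIT, PAGE_READWRITE);

    DWORD got = 0, put = 0;
    for (;;) {
        if (!ReadFile(hSrc, buf, CHUNK, &got, NULL) || got == 0)
            break;                        // error or end of file
        // Note: with NO_BUFFERING the write length must be a multiple of the
        // sector size. VHD files are sector multiples; for arbitrary files the
        // final write must be rounded up and trimmed with SetEndOfFile (omitted).
        if (!WriteFile(hDst, buf, got, &put, NULL) || put != got)
            break;
    }

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(hSrc);
    CloseHandle(hDst);
    return 0;
}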

In the meantime, do certainly consider using xcopy /J to copy large VHD and VHDX files.

NTFS volume defragmentation – Part 3 – supply NTFS even more information

Earlier blogs described the problem of supplying NTFS enough information so that it could make an appropriate placement of a file on an NTFS volume to avoid fragmentation. We concluded that setting the file size information using a SetFileInformation API was not sufficient.

I updated my code – here is the pseudo code from the previous blog with one modification, in an attempt to provide NTFS even more information.

Open source file
Open destination file

GetFileSizeInformationForSourceFile();
SetFileSizeInformationForDestinationFile();
SetFileValidData();          // for destination file

While (!EndOfSourceFile)
{
    Read(SourceFile)
    CheckForEndOfFile
    WriteToDestinationFile (including write a partial buffer if any)
}

Close source file
Close destination file

The one modification is the call to SetFileValidData. As the MSDN documentation points out, this has security implications, in that any data “left over” on the disk blocks allocated to the file can potentially become available to anybody who can open the file; SetFileValidData therefore requires the caller to hold the SE_MANAGE_VOLUME_NAME privilege. But since our copy applet will fully write all blocks of the file, this security concern does not apply in our case.

Contig now shows that the file is unfragmented!

Would it be possible for NTFS to determine the file size and allocate disk blocks accordingly without the need for this API? Only somebody from the NTFS team at Microsoft can answer that question.

But in the meanwhile, application developers have a way to make sure that they supply NTFS all the information it needs to try and avoid file fragmentation.
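
To make the pseudo code above concrete, here is a minimal Win32 sketch of the pre-sizing steps, with the copy loop itself left as a comment since it is unchanged. Two assumptions worth calling out: the snippet uses SetFilePointerEx plus SetEndOfFile as one realization of SetFileSizeInformationForDestinationFile, and it enables the SE_MANAGE_VOLUME_NAME privilege, without which SetFileValidData fails (so the program must run from an elevated, administrator context). Error handling is largely omitted.

// presize.c - pre-size the destination and set valid data before copying.
#include <windows.h>
#include <stdio.h>

// Enable SE_MANAGE_VOLUME_NAME ("SeManageVolumePrivilege"), required by SetFileValidData.
static BOOL EnableManageVolumePrivilege(void)
{
    HANDLE hToken;
    TOKEN_PRIVILEGES tp;
    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
        return FALSE;
    LookupPrivilegeValueW(NULL, L"SeManageVolumePrivilege", &tp.Privileges[0].Luid);
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    BOOL ok = AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL) &&
              GetLastError() == ERROR_SUCCESS;
    CloseHandle(hToken);
    return ok;
}

int wmain(int argc, wchar_t **argv)
{
    if (argc != 3) { wprintf(L"usage: presize <source> <destination>\n"); return 1; }

    HANDLE hSrc = CreateFileW(argv[1], GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    HANDLE hDst = CreateFileW(argv[2], GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hSrc == INVALID_HANDLE_VALUE || hDst == INVALID_HANDLE_VALUE) {
        wprintf(L"open failed, error %lu\n", GetLastError());
        return 1;
    }

    LARGE_INTEGER size;
    GetFileSizeEx(hSrc, &size);                    // GetFileSizeInformationForSourceFile

    SetFilePointerEx(hDst, size, NULL, FILE_BEGIN);
    SetEndOfFile(hDst);                            // SetFileSizeInformationForDestinationFile

    if (!EnableManageVolumePrivilege() ||
        !SetFileValidData(hDst, size.QuadPart))    // SetFileValidData for the destination
        wprintf(L"SetFileValidData failed, error %lu\n", GetLastError());

    LARGE_INTEGER zero = {0};
    SetFilePointerEx(hDst, zero, NULL, FILE_BEGIN);

    // ... the read/write copy loop from the pseudo code goes here, fully
    //     writing every block of the destination file ...

    CloseHandle(hSrc);
    CloseHandle(hDst);
    return 0;
}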


NTFS volume defragmentation – Part 2 – supplying NTFS more information

In an earlier blog, I described how developers typically ask NTFS to place a file on a volume without providing NTFS enough information to ensure the file placement does not lead to fragmentation. In particular, a typical application such as a file copy application does not provide the file size information before the first few blocks of the file are placed on disk.

Here is the same program from the earlier blog with a few additional steps:

Open source file
Open destination file

GetFileSizeInformationForSourceFile();
SetFileSizeInformationForDestinationFile();

While (!EndOfSourceFile)
{
    Read(SourceFile)
    CheckForEndOfFile
    WriteToDestinationFile (including write a partial buffer if any)
}

Close source file
Close destination file

Obviously this is pseudo code meant to convey intent rather than code that can be compiled. The main step here is to determine the size of the source file and then set the size of the destination file to that size – and, importantly, to do so before the first write occurs to the destination file.
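
For concreteness, one plausible Win32 realization of the two sizing calls – an assumption on my part, since there is more than one way to express this, and hSource and hDestination stand for the handles opened at the top of the pseudo code – looks like this:

// Pre-size the destination before the first write (Win32 sketch).
LARGE_INTEGER size;
GetFileSizeEx(hSource, &size);                       // GetFileSizeInformationForSourceFile

SetFilePointerEx(hDestination, size, NULL, FILE_BEGIN);
SetEndOfFile(hDestination);                          // SetFileSizeInformationForDestinationFile

LARGE_INTEGER zero = {0};                            // rewind before the copy loop starts
SetFilePointerEx(hDestination, zero, NULL, FILE_BEGIN);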

After doing this, I inspected the destination file fragments using the SysInternals tool Contig – and found that the file still tended to be fragmented. The expectation was that when the Cache Manager flushes and asks NTFS to commit some parts of the file to disk, NTFS could perhaps retrieve the file size from the open file handle – the file size having been set via the SetFileSizeInformationForDestinationFile call. But this is clearly not the case – at least not on Windows 7, Windows Vista, or Windows XP, which are what I tested with.

The third and last part of this blog series will examine how to provide NTFS the information it needs to properly place the file on the volume and avoid fragmentation.