Author Archives: dilipcnaik

Windows Server 8 encrypts user data in flight between client and server

I have been at the MVP Summit in Redmond, and have been carefully noting “advances” between the Windows Server 8 Developer Preview (made available at the BUILD conference in Anaheim) and the newer Windows Server 8 Beta build made available on February 29, 2012.

One new feature is that the SMB 2.2 protocol, used for client/server file data exchange, now allows for data encryption. Both the client and the server need to be running the new SMB 2.2 protocol – so either both need to be Windows 8, or the server needs to be a fairly new one from a vendor that has implemented SMB 2.2.

Verticals such as the health care industry should find this particularly useful, since the huge overhead of a solution such as IPsec can now be avoided.

Details such as which encryption algorithm is used, as well as the performance characteristics of this encryption, will be covered in a future blog once I perform some experiments with the beta bits.

The only public Microsoft information as of today is at


Windows 8 VHDX file instant dedupe wish list

I have been testing the Windows 8 dedupe feature, especially with large VHDX files. The testing has revealed a major “wish”. Hopefully somebody from the right department at Microsoft reads this and at least puts it on a feature list for the future.

Here is a scenario I exercised – and it seems to be a very common one:

  1. Create a Windows Server VM inside a 40GB VHDX file – call it VM1.vhdx
  2. Xcopy the VM1.vhdx file to, say, VM2.vhdx (and yes, use xcopy /J – see my previous blog “Tips for copying VHD and VHDX files”). That’s 40 GB of reads and 40 GB of writes.
  3. Repeat the xcopy to a different destination file – xcopy /J VM1.vhdx to VM3.vhdx – and that’s 40 GB more reads and 40 GB more writes.
  4. Fire up VM1, enter license info, assign computer name, assign IP address, etc. Turn it into a file server
  5. Fire up VM2, enter license info, etc., install Microsoft Exchange, and turn it into an Exchange server
  6. Fire up VM3, enter license info, etc., install SQL Server, and turn it into a SQL Server
  7. Now let the system idle, make sure it does not hibernate, and wait for dedupe to read all 3 VHDX files (3 x 40 GB worth of reads, etc.) and dedupe them.

Instead, here is an alternative sequence that would be really useful:

  1. Create a Windows Server VM inside a 40GB VHDX file
  2. Run a PS script that creates an instantly deduped second copy of this VHDX file – with all the associated dedupe metadata – so that I now have 2 identical, already deduped VHDX files. The PS script would have to invoke some custom dedupe code Microsoft could ship: create a new file entry for, say, VM2.vhdx, and create the dedupe metadata for both VM1.vhdx and VM2.vhdx.
  3. Repeat the same PS script with different parameters and now I have 3 identical VHDX files, all deduped
  4. Repeat steps 4 through 6 from the first sequence; step 7 – the dedupe step – is no longer needed

This would save hundreds of GBs of reads and writes, as well as administrator time, increasing productivity. Whether you call this instant dedupe or not is up to you.

In the interest of keeping the focus on the instant dedupe scenario, I have deliberately avoided the details of requiring Sysprep’ed installations. But the audience I am targeting with this blog will certainly understand the nuances of requiring Sysprep.

If you are a Microsoft MVP reading this blog, and you agree, please comment on the blog, and email your MVP lead asking for this feature.

Intel Ultrabooks, SSD based laptops, and file system needs

This blog is partly triggered by the new Resilient File System (ReFS) that Microsoft just announced for Windows 8. At least for now, the new file system appears to be aimed more at servers than at laptops or tablets – particularly SSD based laptops and tablets. More about ReFS in other blogs.

For the record, I believe Intel holds a trademark on the term Ultrabook.

I am not sure my Windows 7 and Windows 8 Developer Preview NTFS based laptops need a better metadata checksum mechanism, let alone a better user data checksum mechanism. But here is what I do believe my NTFS based laptops (Windows 7, and presumably also Windows 8) need, especially when the disk is an SSD.

  • Can the OEMs please stop bundling and/or offering a disk defragmentation utility with SSD based systems? SSD based volumes do not need to be defragmented – indeed, defragmentation reduces the life of the SSD! Maybe the Microsoft OEM division, especially for Windows 8, as well as the Intel Ultrabook division, can do something about this?
  • Microsoft, thank you for disabling the built-in defrag code in Windows 7 when an SSD based NTFS volume is detected. Hopefully the same is true in Windows 8 as well.
  • SSDs need their unused data blocks to be erased before reuse – that is just the nature of the physics involved – and erasing ahead of time makes writes faster. After a user has had an SSD based laptop for a while, all of the blocks have been written at least once, so it would be advantageous for the disk firmware to know which blocks the file system (NTFS) considers unused, so that it can go ahead and erase them. The idea is that disk blocks are already erased and ready to be written when I download the latest movie, whether from iTunes, YouTube, or my DVD drive. Enter the Windows 7 “TRIM” command, with which Windows 7 passes down information about which disk blocks it just “released” and which can therefore be erased. The problem is that it is not clear which drive vendors make use of the TRIM command. And when a new version of the drive firmware does make use of it, do the laptop OEMs bother to ship that firmware? I understand there are profits to be made, and that goal may at least temporarily result in a situation where a Windows 7 SSD volume simply ignores the TRIM command. It would be interesting to get those statistics – whether from Microsoft, a drive vendor, a laptop OEM, or for that matter Intel for its Ultrabook branded OEMs.
  • It would be even more useful to have the same insight for Windows 8 SSD based laptops and tablets, whenever those become commercially available. While I will not buy a laptop or tablet simply because it makes proper use of the TRIM command, I do know that 64 GB and 128 GB SSDs tend to fill up very quickly, and hence the TRIM command will help. So it is certainly an important consideration, and perhaps one way an OEM can differentiate their offering.

More about ReFS and SSDs in a new blog at a later date.

Tips for copying large VHD and VHDX files

I have been copying VHD files for a while, partly putting up with some issues, but I finally devoted the time to look at them a little closer.

I have 2 different systems running Windows Server 2008 R2, one with 4GB of physical RAM and one with 16GB. Obviously these are developer systems and certainly not production systems. The problem happens when I copy large VHD files, large being defined as anything significantly larger than the amount of physical RAM on the system doing the copying – so for these 2 systems, say anything larger than 20GB.

I used copy or xcopy with the default options to copy a large VHD file from one local volume to another. Investigation showed that physical RAM usage grew to 100% and stayed there while the copy was happening. I was careful not to run any other tasks on either system. It also appeared that the physical RAM was all being consumed by the Cache Manager.

Once the physical RAM usage hit 100% (as observed via PerfMon), I tried starting up NotePad. In a highly unscientific study that does not have enough data points, I found that at least half the time, the system “hiccupped” noticeably before NotePad ran – it took a while even for me to type Start, Run, notepad. There certainly were times, especially early in the copy process, when the systems were highly responsive even with physical RAM usage at 100%.

My speculation – and I have not done any investigation to verify – is that the physical RAM is being consumed by the Cache Manager for 2 different purposes. One is read ahead of the source VHD file, and the second is caching the data written into the destination VHD file. The cache manager is more willing – and able – to give up memory that has read ahead cached data. The cache manager has to work harder – and will consume system resources – when it needs to free up RAM that has cached data written to the destination VHD file.

Looking around, I noticed that the problem of copying large files on Windows seems to be well known. The Microsoft Performance team has written a blog, “Slow large file copy issues”. They conclude that the solution is to copy the large file non-cached. While the Microsoft Performance team suggests using the Exchange EseUtil utility, I am not sure how many of my readers have access to it. I will also point out that I don’t understand the legal issues in taking a utility that ships with Microsoft Exchange and copying it to a non Exchange system!

The simpler solution, again as the Microsoft Performance Team advocates, is using the Windows 7 or Windows Server 2008 R2 xcopy utility and making sure to specify the /J option indicating that the file should be opened and copied in a non-cached manner.

My same unscientific testing shows that xcopy /J works well in copying large VHD files. Someday, I will trace this to figure out whether xcopy /J performs non-cached I/O on both the source and destination VHD files, or on just one of them.

In the meanwhile, do certainly consider using xcopy /J to copy large VHD  and VHDX files.
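To make the chunked copy loop concrete, here is a minimal Python sketch of copying a large file in fixed chunks through low-level OS calls. Note that this sketch still goes through the system file cache: the truly non-cached behavior of xcopy /J comes from the Windows-only FILE_FLAG_NO_BUFFERING flag (O_DIRECT is the rough Linux analogue), which is not shown here. The function name and chunk size are my own choices for illustration.

```python
import os

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB chunks; large sequential I/Os suit big VHD/VHDX files

def chunked_copy(src_path, dst_path, chunk=CHUNK_SIZE):
    """Copy src_path to dst_path in fixed-size chunks using os-level I/O.

    This still uses the OS file cache; xcopy /J avoids the cache by opening
    files with FILE_FLAG_NO_BUFFERING, a Windows-specific flag.
    """
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        while True:
            buf = os.read(src, chunk)
            if not buf:            # end of source file
                break
            os.write(dst, buf)
    finally:
        os.close(src)
        os.close(dst)
```

On Windows, both os.open calls would also need os.O_BINARY; I have omitted platform specifics for brevity.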

NTFS volume defragmentation – Part 3 – supply NTFS even more information

Earlier blogs described the problem of supplying NTFS enough information to make an appropriate placement of a file on an NTFS volume and avoid fragmentation. We concluded that setting the file size information using a SetFileInformation API was not sufficient.

I updated my code – here is the pseudo code from the previous blog with one modification, in an attempt to provide NTFS even more information:

Open source file
Open destination file
GetSourceFileSize()
SetFileSizeInformationForDestinationFile()
SetFileValidData()          // for destination file
While (!EndOfSourceFile)
    ReadFromSourceFile()
    WriteToDestinationFile()          // including writing a partial buffer, if any
Close source file
Close destination file

The one modification is the call to SetFileValidData. As the MSDN documentation points out, this has security implications in that any data “left over” on the disk blocks allocated to the file can now be potentially available to anybody who can open the file. But since our copy applet will fully write all blocks of the file, this security concern does not apply in our case.

Contig now shows that the file is unfragmented!

Would it be possible for NTFS to determine the file size and allocate disk blocks accordingly without the need for this API? Only somebody from the NTFS team at Microsoft can answer that question.

But in the meanwhile, application developers have a way to make sure that they supply NTFS all the information it needs to try and avoid file fragmentation.
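For developers on other platforms, the same idea can be sketched in Python. This is only a rough analogue with names of my own choosing: truncate plays the role of setting the destination file size up front, and Python has no portable equivalent of SetFileValidData, so that Windows-specific step is omitted.

```python
import os

def copy_with_preallocation(src_path, dst_path, chunk=1024 * 1024):
    """Copy a file, telling the file system the final size before the first write.

    dst.truncate(size) is a rough analogue of setting the destination file
    size information; SetFileValidData has no portable Python equivalent.
    """
    size = os.path.getsize(src_path)   # determine the source file size up front
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        dst.truncate(size)             # declare the final size before writing
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            dst.write(buf)
```

The key design point, matching the pseudo code above it, is that the size declaration happens before the first data write reaches the file system.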



NTFS volume defragmentation – Part 2 – supplying NTFS more information

In an earlier blog, I described how developers typically ask NTFS to place a file on a volume without providing NTFS enough information to ensure the file placement does not lead to fragmentation. In particular, a typical application such as a file copy application does not provide the file size information before the first few blocks of the file are placed on disk.

Here is the same program from the earlier blog with a few additional steps:

Open source file
Open destination file
GetSourceFileSize()
SetFileSizeInformationForDestinationFile()
While (!EndOfSourceFile)
    ReadFromSourceFile()
    WriteToDestinationFile()          // including writing a partial buffer, if any
Close source file
Close destination file

Obviously this is pseudo code meant to convey intent, not code that can be compiled. The main new steps are to determine the size of the source file and then set the size of the destination file to that size – and, especially, to do that before the first write occurs to the destination file.

After doing this, I inspected the destination file fragments using the SysInternals tool contig – and found that the file still tended to be fragmented. The expectation was that when the Cache Manager flushes and asks NTFS to commit parts of the file to disk, NTFS could retrieve the file size from the open file handle – the file size having been set via the SetFileSizeInformationForDestinationFile call. But this is clearly not the case – at least not on Windows 7, Windows Vista, or Windows XP, the versions I tested with.

The third and last part of this blog series will examine how to provide NTFS the information it needs to properly place the file on the volume and avoid fragmentation.

Notes on Windows 8 VHDX file format

Last year, I wrote an article on TechNet regarding the performance issues with the dynamic VHD file format; that article can be found here. The main point made in the article was that the dynamic VHD file format ensured that 3 out of every 4 2MB data blocks were misaligned, in the sense that they did not begin on a 4096 byte boundary. With large sector disks on the horizon, this will become even more important.

With Windows 8, Microsoft appears to have addressed this issue with the new VHDX file format. As disclosed in some talks at the BUILD conference, the VHDX file format:

  • Extends VHD file based volumes to 16TB from the current 2TB limit
  • Improves performance
  • Makes the file format more resilient to corruption

While the best way to evaluate these features would be to compare the VHDX file format with the VHD file format, unfortunately Microsoft has not yet released the VHDX file format.

I decided to look into this a little further and wrote a little applet that writes 4KB at offset zero in a file, and 4KB at offset 2MB. I ran the program twice:

  • Once with the file housed in a VHD based volume
  • Once with the file housed in a VHDX based volume

Given that the VHD file format in Windows Server 2008 R2 uses 2MB data blocks, the applet effectively writes at the beginning of each of the first 2 data blocks.
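The applet itself is trivial; my original was a small native program, but the two writes can be sketched in Python (names and fill bytes are illustrative):

```python
import os

FOUR_KB = 4 * 1024
TWO_MB = 2 * 1024 * 1024

def write_probe(path):
    """Write 4KB at offset 0 and 4KB at offset 2MB.

    With the 2MB data blocks of the Windows Server 2008 R2 dynamic VHD
    format, these writes land at the start of the first two data blocks.
    """
    with open(path, "wb") as f:
        f.write(b"\xaa" * FOUR_KB)   # 4KB at offset zero
        f.seek(TWO_MB)               # jump to the start of the second data block
        f.write(b"\xbb" * FOUR_KB)   # 4KB at offset 2MB
```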

While I plan to analyze the I/Os in more detail, for now, there is one interesting observation. Here are the I/Os on the VHD file traced using Process Monitor in the Windows Server 2008 R2 parent.

And here are the I/Os traced with the file housed in a VHDX based volume

The immediate observation is that there are some 512 byte writes on the VHD based volume, whereas the VHDX based volume shows no 512 byte writes at all. The 512 byte writes are presumably writes to the sector bitmap of the VHD file format. While the conclusion is not definitive, one is drawn to believe that the 512 byte sector bitmap has been replaced and/or moved – maybe the sector bitmaps are now stored together rather than interspersed between data blocks.

More on this topic in a later blog.

NTFS volume defragmentation – Part 1 – a developer perspective

This blog is about NTFS volume fragmentation from a developer perspective.

As a developer, my perspective is that many applets – including, but not limited to, Microsoft tools and utilities – provide NTFS with insufficient information to place a file on a volume so as to reliably avoid fragmentation.

As an example, assume we are coding a file copy applet. The typical applet, with some oversimplification, might look something like this:

Open source file
Open destination file
While (!EndOfSourceFile)
    ReadFromSourceFile()
    WriteToDestinationFile()          // including writing a partial buffer, if any
Close source file
Close destination file

The example ignores the code to get and set file attributes and ACLs, purely to concentrate on the fragmentation that occurs while writing the default data stream.

Assume both files are opened buffered. At some point in time – typically non deterministic, since it depends upon system state, including what else is in the Cache Manager buffers, what other apps are running, etc. – the Cache Manager will decide to flush the destination file. At that point, the Cache Manager will have N buffers cached, with N being a number that depends upon system state. If you copy the same file one more time, N may be the same or different when the Cache Manager decides to flush the file buffers during the second copy.

Now consider what NTFS needs to do. It needs to decide where to place the destination file, and write out the buffers being flushed. In effect, the copy applet we wrote is telling NTFS:

  1. Find a place for the file on the volume
  2. I am not telling you how big the file is, but make sure it does not cause fragmentation!

An unconfirmed rumor is that NTFS guesses the file size to be twice the size of the Cache Manager contents and, based on that assumption, places the file on the volume. Whether that is true or not is immaterial.

The takeaway is that NTFS needs to know the size of a file before it decides where to optimally place the file – and very often, that information about the file size is unavailable when NTFS is asked to make that file placement decision.

A future blog will explore this further and examine what APIs an application can invoke to provide NTFS with all the information it needs to avoid fragmentation.

Slow file copy or slow file transfer with various Windows versions 2k8, 2k8R2, 2k3

There are various posts in the Microsoft Technet File Systems and Storage forum (and other forums) indicating slow file copy/transfer speed. The details vary, but most of them involve either Windows Server 2008, Windows Server 2008 R2, or Windows Server 2003. This document is an attempt to coalesce all the various solutions into a single place. Note that the author of this document has not personally tested any of the solutions, but various posts in the forum indicate these to have solved the problem for some users.

Symptom: File copy or File transfer speed is slow, either to a local drive or a network drive, especially so when a Windows Server 2008 or newer Windows file server is involved.

Possible solutions, in no particular order:

  • Make sure both IPv6 and IPv4 are running on 2008 R2, even if the 2008 R2 server is the lone IPv6 device on the network! There may be alternate solutions but this solution has been reported to work [1] [2]
  • Tune TCP on client – set autotuning to off “netsh interface tcp set global autotuninglevel=disabled” [2]
  • Tune TCP on client and server – turn off receive side scaling “netsh interface tcp set global rss=disabled” [3]
  • Tune TCP disable large send offload [5] [17]
  • Tune TCP disable large receive offload [11]
  • Tune TCP – disable task offload [5] [7] [9] [11] [16]

To disable task offload, edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters: create a new DWORD value named DisableTaskOffload and set its value to 1

  • Use non-cached writes while copying the file, e.g. run the Windows 7 xcopy, which supports a new /J switch to do non-cached writes. The Microsoft Performance Team uses the Microsoft Exchange utility eseutil to copy files using non-cached writes; however, some Microsoft customers may not have Microsoft Exchange, and the legalities of copying that utility from an Exchange server to a file server, as the Microsoft Performance Team blog in reference [6] suggests, are beyond the scope of this compilation of solutions. [6] [14]
  • Install all hotfixes from Microsoft [8]
  • Don’t use File Explorer to do copy/paste – and if you do, ensure Remote Differential Copy is turned off [4]
  • If one of the drives involved in the file copy is an external USB drive, ensure that you are using the USB 2 protocol and not the older, slower USB 1. Start Device Manager, find USB Controller, right click USB2 Enhanced Controller Host, and choose Properties; make sure it is enabled. Repeat these steps for all USB2 Enhanced Controller Hosts.
  • Make sure drive compression is turned off [10]
  • Make sure antivirus is fully updated or if possible, remove antivirus solutions from the equation to see if that is the cause [12]
  • Install all updated drivers from vendors especially NIC vendors and RAID drivers [13] [16]


References:

  1. What causes Windows 2008 R2 slow file and folder copy
  2. Server 2008 R2 core slow file copy
  3. Windows 2008 R2 File and Folder Copy very slow over 1000MBs link
  4. Why is Windows 7 still as slow as Vista with File Copy
  5. Slow network file copy on Windows 7
  6. Slow large file copy issues – Microsoft Performance team blog
  7. Slow file copy between Vista and Small Biz Server
  8. Low performance when you transfer files from external IEEE 1394 device to server
  9. Very slow DFSR on Windows Server 2008
  10. Server 2008 File transfer is slow
  11. File Copy very slow to Windows Server 2008 R2
  12. IPS driver causes network slowdown
  13. Network transfers start fast then slow down
  14. Windows 2008 R2 large file copy uses all available memory and slows down
  15. Slow LAN transfer Windows Server 2008 R2 – enable IPv6
  16. Windows Server 2008 R2 file/folder copy/paste very slow
  17. Windows 7 Large File copy painfully slow


SMB 2.2 and other Windows 8 protocol documentation

With Windows 8, Microsoft is significantly pushing the envelope on Windows Server performance and high availability, while also enabling these features with low cost Network Attached Storage.

Recall the days when Microsoft was fined some 2 million dollars per day for not publishing proper documentation for various protocols. What is remarkable is that while Windows 8 has not even reached Beta, the documentation for the various new protocols in Windows 8 has already been published at this MSDN link. In no particular order, here is a summary of the storage and NAS protocols you will find there:

  1. Windows 8 will support Hyper-V and SQL Server storing VHD files on an SMB 2.2 NAS share. A new File Server Remote VSS Protocol will enable shadow copy based backups (and restores) of these NAS shares.
  2. The new SMB 2.2 protocol, which:
    1. Allows clients to obtain leases on directories and not just files
    2. Extends SMB 2 to allow a client to form multiple connections to a server across multiple channels and have data flow in parallel across all of them, providing bandwidth aggregation as well as resiliency by falling back to the remaining channels when a given channel fails
    3. Allows clients to retrieve hashes for ranges of a file – used in branch cache scenarios
  3. A new SMB over RDMA protocol that allows setting up an SMB 2 based client/server connection and then transferring data using an RDMA capable transport such as iWARP or InfiniBand
  4. A Storage Services protocol that provides for scenarios such as creating and modifying volumes and shares, creating and managing shadow copies, etc.

Notably absent so far is the Witness Notification Protocol document, which forms the basis for high availability NAS shares – the highly significant enhancement that lets clients transparently fail over to new servers/shares as needed, without needing to re-open files. Presumably, this document and the other remaining documents will get posted soon.

Note that all of the documents are marked preliminary. This is not surprising, given that Windows 8 has not yet reached Beta, and presumably Microsoft reserves the right to make modifications as needed.