Deployment Solution

All USB Flash Drives are Not Created Equal 

07-31-2012 12:11 PM

In our environment, we initiate nearly 20 imaging sessions weekly for migrations and PC rebuilds. As most of these deployments use USB Flash drives to accelerate imaging, it’s important to understand performance concerns as they arise.

Over the last few days I’ve been tackling reports of performance issues experienced by our techs whilst imaging. The results were a little surprising, and they once more reinforce the mantra that all USB flash drives are not created equal.


A few years back, we enhanced our Altiris imaging solution to take advantage of USB Flash drives. The requirement we had was simple: we needed to turn around a computer deployment in 30 minutes, no matter where the target computer was located. Further, as network friendliness was a must, computer deployment was not permitted to impact the target site’s local network or links.

So, we wrote an image ‘hunt’ option into our deployment scripts, which meant that network-based imaging sessions were initiated only when a local copy of the image could not be found.

We expected these scripts to improve our deployment times significantly. To see why, let’s look at some figures:

  • 100 Mbps LAN
    Theoretical bandwidth of 12.5MB/s, which is shared across many machines. It is generally accepted that, with protocol overheads, practical bandwidth is limited to ~11MB/s.
  • USB 2.0 Interface (High Speed)
    Theoretical bandwidth of 60MB/s, which is exclusive to the host machine. It is generally accepted, though, that in practice this is limited to about 35MB/s.

So, all things being equal, we could reasonably expect to deploy a computer about 3 times faster when using a USB flash drive based on USB 2 technology. Further, we could reasonably anticipate our deployment times to be more reliable as the USB imaging mechanism isolates traffic to the host.

These practical limits are all well and good, but what about reality? In our environment, for example, our experience of LAN-based imaging was that speeds were typically around 150MB/min (2.5MB/s). USB flash transfer rates seemed to sit at around 15MB/s.

In reality then, implementing our USB imaging solution could reasonably be expected to accelerate our deployments by a whopping factor of 6. For the first time, deskside imaging could be implemented on a practical timescale of just 10 minutes rather than an hour.
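These figures are easy to sanity-check with a little arithmetic. The sketch below (Python, with the throughput numbers above hard-coded and a hypothetical 6GB image assumed for illustration) compares the theoretical interface ceilings against the rates we actually observed:

```python
def deploy_minutes(image_gb, mb_per_s):
    """Minutes to transfer an image of image_gb gigabytes at mb_per_s MB/s."""
    return image_gb * 1024 / mb_per_s / 60

IMAGE_GB = 6  # hypothetical image size

# Practical interface ceilings
print(f"LAN ceiling:  {deploy_minutes(IMAGE_GB, 11):5.1f} min")   # ~9.3 min
print(f"USB ceiling:  {deploy_minutes(IMAGE_GB, 35):5.1f} min")   # ~2.9 min

# Rates observed in our environment
lan = deploy_minutes(IMAGE_GB, 2.5)  # ~150MB/min over the LAN
usb = deploy_minutes(IMAGE_GB, 15)   # typical USB 2.0 flash drive
print(f"LAN observed: {lan:5.1f} min")         # ~41 min
print(f"USB observed: {usb:5.1f} min")         # ~6.8 min
print(f"Observed speed-up: x{lan / usb:.0f}")  # x6
```

Whatever image size you plug in, the speed-up factor is the ratio of the two observed rates, which is where the factor of 6 comes from.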

Note: I’ve neglected USB 3 (SuperSpeed) technology here as we don’t have any machines or flash drives of that spec. For completeness though, it has a theoretical bandwidth of 625MB/s, exclusive to the host machine, which means perhaps 400MB/s achievable in practice. USB 3.0 devices therefore have the potential to be blisteringly fast.


The Problem

So, the report coming in from a tech was that USB imaging deployment times had suddenly become unreliable. This was a little disconcerting: the imaging environment had been pretty stable for the last couple of years, with changes coming in only every 8 months or so.

Although the problem was limited to one tech, it was clear that without understanding the cause, we had no way to judge its true scope. With a major rollout starting in just a few days, I decided it was prudent to put aside my workload for a couple of days to figure out just what was up.


USB Flash Drive Performance Summary

To diagnose the problem, I got hold of as many USB flash drives as I could to fairly represent what was currently in active use by our techs. I then booted a Dell OptiPlex into Altiris Linux automation and started performing some coarsely timed file copy tests. The results there made me suspicious, so I soon moved on to more detailed benchmarking using the Windows utility HD Tune.

Below I’ve compiled the results for the 5 Kingston DataTraveler drive models that we’ve got deployed:


Before delving into the results, I should clarify that:

  1. All the flash drives were benchmarked on Windows 7 32-bit (home-built PC), Windows 7 64-bit (Dell Latitude E6320, Dell OptiPlex 980) and Altiris Linux automation (Dell OptiPlex 990). Transfer rates showed good, broad agreement across all systems.
  2. I sampled at least two drives in each model range to confirm I hadn’t got a random dud drive. Again, good agreement was found (apart from one drive, which I’ll discuss later).
  3. In Windows, I used HD Tune to get the transfer rates. In Linux automation, I used Linux’s dd command.
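For reference, the coarse Linux timing test amounts to a timed sequential read, much like `dd if=/dev/sdX of=/dev/null bs=1M count=256`. Here’s a minimal Python equivalent (a sketch only; the device path is an assumption, and on regular files the OS page cache can inflate results):

```python
import time

def sequential_read_mb_s(path, total_mb=256, block_mb=1):
    """Time a sequential read of total_mb megabytes from path; return MB/s."""
    block = block_mb * 1024 * 1024
    target = total_mb * 1024 * 1024
    read_bytes = 0
    with open(path, "rb", buffering=0) as f:
        start = time.perf_counter()
        while read_bytes < target:
            chunk = f.read(block)
            if not chunk:  # reached the end of the device/file early
                break
            read_bytes += len(chunk)
        elapsed = time.perf_counter() - start
    return read_bytes / (1024 * 1024) / elapsed

# Example against a raw device node (read-only, run as root, hypothetical path):
# print(f"{sequential_read_mb_s('/dev/sdb'):.1f} MB/s")
```

Like dd, this only gives a single sequential figure; HD Tune’s value is that it sweeps the whole capacity, which matters later in this article.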


Looking at the results, I could make a pretty good guess that those experiencing imaging problems were using the latest high-capacity DT 101 drives. This 64GB variant of the Generation 2 drives puts on an exceptionally poor show at 9MB/s; in fact, it is on par with my aged DT 2.0 drive, purchased back in 2003.

It is also interesting to note that none of the drives we have comes even close to the 35MB/s throughput ceiling. My 500GB Buffalo USB drive (a mechanical disk) can max this out, and my guess is that in the flash drive market only SuperSpeed USB 3 flash drives will be able to max out the USB 2 interface.

In short, we had to recall these 64GB drives from the techs pronto. Whilst the capacity gave more flexibility in terms of image storage, the performance figures rendered them useless for our particular requirement. This was quite a surprise, as they were ordered as a safe bet: the 32GB drives in the same range were absolutely fine.


The Random Dud Drive

In my initial Linux testing, one particular drive exhibited a problem oddly reminiscent of the performance degradation seen in mechanical hard disks. When I moved from testing 500MB file transfers to 1GB file transfers, I noticed a significant (and confusing) drop in performance.

The reason this was confusing is simple: a flash drive’s performance is not expected to vary as you work through the drive from beginning to end. This is because the overhead in reading a bit from a flash drive is purely electrical in nature and is independent of the location of the bit being accessed. Drive performance metrics like throughput and access time should therefore be constant across the drive’s capacity.

The figure below tries to illustrate this by taking an HD Tune drive benchmark from a physical disk, and overlaying it with what you can expect from a USB Flash device.



Breaking down the graph by drive technology, we have:

  •  Mechanical Drive
    The classic performance staircase of disk drives, showing throughput decrease as we move away from the outer tracks. Here we see the drive’s performance drop from nearly 60MB/s to 30MB/s as we move from the outermost tracks to the innermost ones.

    Although the access time scatter plot is interesting, its details are not important for the purposes of this article. It is due to the combination of the drive’s mechanical limitations: the head seek movement and the drive’s rotational RPM. For more information on this, see my “Getting The Hang Of IOPS” article on Symantec Connect.
  • USB Flash Drive
    Here I’ve depicted a fictitious USB flash drive which maxes out the USB 2 interface at 35MB/s. The performance is a horizontal line, as the electrical overheads in retrieving data do not depend on where the data sits on the flash drive. In practice you will see some minor structure here, but these fluctuations will invariably be less than 1MB/s.

    The access times of the flash drive are also staggeringly low; generally, USB flash drives show access times below 1.5ms. Again, as the overheads in data access are purely electrical in nature, these times do not vary as we sweep across the drive’s capacity.

Now that we have an idea of what we should be seeing, let’s look at the benchmark results from the offending 8GB drive:

This is certainly not the flat line we were expecting. Further, as this benchmark was repeated across three different machines, I’m pretty convinced the profile’s characteristics are due to the drive itself.

What we’re seeing here is the drive’s throughput essentially switching from 24MB/s to 13MB/s once we’ve moved past the first 30% of the drive’s capacity. In effect, if you imaged a computer with this drive you’d see a top-notch imaging speed for the first 2GB image file, but this would plummet for the next two.
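The practical cost of that cliff is easy to estimate. Here’s a sketch (Python) using the figures from the benchmark above: 24MB/s up to the 30% mark of the 8GB drive, 13MB/s beyond it, and our standard 2GB image file split:

```python
FAST, SLOW = 24.0, 13.0   # MB/s, from the HD Tune benchmark above
BREAK_GB = 0.3 * 8        # throughput steps down 30% into the 8GB drive

def file_minutes(start_gb, size_gb=2.0):
    """Minutes to transfer a size_gb file stored at offset start_gb."""
    fast_gb = max(0.0, min(size_gb, BREAK_GB - start_gb))  # portion at 24MB/s
    slow_gb = size_gb - fast_gb                            # portion at 13MB/s
    return (fast_gb / FAST + slow_gb / SLOW) * 1024 / 60

# A 6GB image split into three 2GB files, laid out from the start of the drive:
for i in range(3):
    print(f"file {i + 1}: {file_minutes(i * 2.0):.1f} min")
# file 1: 1.4 min, file 2: 2.4 min, file 3: 2.6 min
```

The first file transfers in under a minute and a half, while the second and third each take nearly twice as long — exactly the sort of mid-deployment slowdown that prompts a tech to report unreliable imaging times.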

So what’s going on? In short, I don’t know. My wild guess is that this flash drive has been contaminated with low-binned memory chips.

Whatever the reason though, for our purposes this drive is a reject.


My favourite get-out clause when presented with results that don’t make sense is to attribute the effect to cosmic rays. When an effect is reproducible, though, it’s often worth taking the time to look into it and figure it out.

We found in the end that we had two issues with USB imaging, which combined to make isolating the degradation in imaging performance a little tricky:

  1. The hot-off-the-press high-capacity 64GB Kingston DT 101 drives being about half the speed of their 32GB equivalents
  2. At least one of our 8GB DataTraveler drives exhibiting a manufacturing flaw which affected its performance only after navigating more than 30% into the drive’s capacity

As deployment times are critical in our environment, the only sensible solution is to benchmark our deployment drives with the same gusto that we apply to prospective user hardware. HD Tune is ideal for this:

  1. It benchmarks the drive across its entire capacity, allowing us to see flaws which might not be revealed in a small image deployment run
  2. It delivers results on USB flash quickly (just a few minutes)
  3. It is easy to use, which allows testing responsibility to be devolved to the techs
  4. A free version exists for personal use

In summary, these problems slipped into our environment unnoticed as we assumed too much of our USB Flash Drives. We assumed a certain equality in flash drive performance (even from the same manufacturer/model range) that wasn’t necessarily there.

This quick investigation shows you can’t take anything for granted. Unfortunately, speed class ratings (such as those available for SD cards) do not exist for USB flash drives. Getting data from manufacturers on their drives’ performance is also not trivial, so this can become a simple case of buyer beware.

The only way to be sure is to benchmark.


Thoughts for the Future

My ultimate aim is to move all the techs’ imaging drives to USB 3. Only these SuperSpeed USB flash devices can hope to saturate the USB 2 bus and provide the ultimate image delivery times across our estate.

Naturally, there is also a measure of future-proofing here: these SuperSpeed drives will mesh nicely with the next generation of laptops and desktops, which will increasingly possess both USB 3 interfaces and solid-state drives. Such combinations will permit imaging speeds in excess of 200MB/s, which will surely be a wonder to watch.



09-18-2012 07:12 AM

very nice article.

09-10-2012 02:01 PM

Nice article!  thank you for taking the time to write it up.

09-10-2012 12:08 PM


Welcome to the Connect forums! I did dig out some SanDisk flash drives for comparison testing a couple of weeks after writing this, and indeed I found their speeds to be generally quite consistent at the 25MB/s read you indicate for the Contour. Fine drives!

Kind Regards,



09-10-2012 10:04 AM

Hey Guys,

Nice article. Just wanted to add that I have been using a SanDisk Extreme® Contour™ for a few years now, and the reason I bought it was the advertised speed advantage. I can say with confidence that there are quite a few products out there that have high capacity but very slow transfer speeds.

Here are details of the product from their site (& no, I do not work for or get paid by them :) ):

SanDisk Extreme® Contour™ - 32GB


High performance: super-fast data transfer at up to 25MB/second* read and 18MB/second* write speeds



09-10-2012 09:46 AM

Thank you for the article.  I also came across this site which has its own benchmark and online database of results:

This benchmark does break down read and write transfer rates by block size, so if you deal mostly with small file sizes, you can focus on those results. I think the site is good for separating the wheat from the chaff when searching for a new flash drive, though some of the results on there may have you scratching your head...

09-08-2012 04:51 AM

I actually format my USB drives as FAT32, as I use syslinux to boot my drives and it does not support NTFS.

The reason for splitting image files into 2GB chunks, however, is largely that this is the default setting for imaging with rdeploy/ghost. I keep this setting (rather than expanding it to 4GB) as it means that when I perform an offline update to my images I don’t have to resync the entire image, just the 2GB image files affected.

(the partition limit for FAT32 is 2TB and the maximum file size is 4GB)

09-07-2012 04:19 PM

That's where my curiosity came in, because in the article you mentioned 2GB file sizes and I thought, "Surely they're not using FAT16 and splitting the image file into 2GB segments."  Thanks for clarifying.

Again, a well written and informative article.


UPDATE: changed "partitioning" to "splitting" to avoid confusion (since partitioning is a reserved word when discussing drives)

09-07-2012 04:12 PM

The USB drive must be NTFS-formatted, as FAT has a file size limitation.

09-07-2012 10:26 AM

This was an eye-opening article.  Thanks for sharing.  I'm curious, do you have the drives formatted as FAT or NTFS?

08-07-2012 06:33 AM

Just received our Kingston HyperX 3.0 64GB USB flash drive. On a USB 2 port, the performance is pretty good at 32MB/s.

That's 2GB a minute, and it will be very snappy at delivering our images...



But what about its true USB 3 capability? For that, I'll plug it into a new Dell Latitude E6330, which possesses a lovely SuperSpeed USB port...

And this is amazing: 250MB/s, or to put it another way, a rather remarkable 15GB/min.

This creates an extremely important value argument for spending an extra £80 per PC/laptop to move forward with SSDs. Build times can be reduced by at least a factor of 10.

Just think of it: an at-desk rebuild could take just a minute, with the bulk of the time being the automation boot. Brilliant.

Kind Regards,

