Are Partitioning Applications Passé?
Last week I needed to resize the partitions on my hard disk, so I decided to use a non-destructive approach. I had intended to review the application I used, but I found a few issues that will be resolved in the next major release, so I’ll wait until then.
Despite those issues, I came to the realization that partitioning software may be passé.
Windows used to need partitioning software because rebuilding your system was a major chore. But with the likes of Ghost, it became much easier to back up and restore your PC.
The same now seems true for the Mac.
No one in their right mind would repartition a hard disk without backups of data. And if you don’t want to spend a weekend reinstalling and reconfiguring software (on the off chance that the non-destructive repartitioning fails to be non-destructive), a backup of your system is highly advisable as well.
However, if you’ve got a backup of your system and a backup of your data, why not just repartition destructively (using Disk Utility)? It’s certainly no slower than non-destructive repartitioning, which can be very slow.
I ended up going down the destructive path when I discovered I couldn’t merge two non-contiguous partitions with the software I was using. (That’s one of the issues to be addressed.)
Using SuperDuper! made my task much easier. I run a second (external) hard disk that receives a backup of my system partition every day, and since this is OS X, I can easily boot from that backup. Most of my data lives on the system partition in my user folder; however, I keep my iTunes music and iPhoto images on a separate partition (called “data”), along with non-essential files like downloads.
After the failed attempt at repartitioning the non-destructive way, I booted into my backup system, repartitioned using Disk Utility, and then restored the system and data to the relevant newly resized partitions.
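For the terminally curious, the same destructive repartitioning can be driven from Terminal with diskutil. A minimal sketch, assuming the internal disk is disk0 and the two-partition layout I wanted; the identifier, names, and sizes here are purely illustrative, and the second command erases the whole disk, so double-check diskutil list before attempting anything like it:

    diskutil list                    # confirm the disk identifier first
    # Destructively repartition disk0 into a GUID-partitioned "System"
    # volume of 80 GB and a "Data" volume using the remaining space.
    diskutil partitionDisk disk0 2 GPT \
        JHFS+ System 80G \
        JHFS+ Data R                 # "R" takes the remainder of the disk

Restoring from the backup then proceeds exactly as described above.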
QED, really.
Under what circumstances would non-destructive partitioning really be an advantage? Only if you don’t have adequate backups, and then you’re living dangerously.
It’s a tough situation for the developers of partitioning software. And it may get tougher.
Apple includes a non-destructive partitioning tool with Boot Camp. Currently this tool only lets you repartition once: it requires your system disk to start out as a single partition, and it creates only an OS X partition and a Windows partition. It’s quite foreseeable that Apple could extend this tool in Leopard to allow full repartitioning. And if that happens, you can kiss another segment of third-party software development goodbye.
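As far as I can tell, the Boot Camp beta’s resizing is exposed through a diskutil verb added alongside it, so the plumbing is already in place. A hedged sketch of a live (non-destructive) shrink; the identifier and sizes are hypothetical, and as always you’d want backups first:

    # Shrink the volume at disk0s2 to 60 GB in place, and create a new
    # 20 GB FAT32 volume named "Windows" in the reclaimed space.
    diskutil resizeVolume disk0s2 60G "MS-DOS FAT32" Windows 20G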
Would Apple include such a powerful tool in OS X? For many users (who as owners of Macs do have admin access) it would be like giving a two-year-old a Bowie knife—there could be a few serious accidents before they get the hang of it.
My destroy-and-restore approach produced only two small problems. Many of Overflow’s items got scrambled, and I needed to redefine my SuperDuper! backups because, although the partitions still have the same names, SuperDuper! appears to address them at a lower level using unique IDs.
From my experience, if I need to repartition in the future, I’ll go the “destroy and restore” route rather than “non-destructive.” But I’d be really interested to hear readers’ thoughts on this issue.
Hey—and it’s a good way to find out if your backups are working!
Comments
I think Apple crippled Boot Camp partitioning deliberately and will not extend it. As a long-time fan of installing Linux on Windows machines, after using FIPS, I found Ranish’s Partition Manager to be a great free app that I wish would work on HFS+ file systems. But Apple has to worry about users drifting to Linux, and about developers re-creating Apple’s (iTunes Store) functionality on Linux or NetBSD or OpenBSD. Apple already drags its heels on Darwin development and would like to keep a short leash on any alternative operating systems; otherwise they would have moved away from HFS and HFS+ and become more serious about supporting UFS.
In Windows, it’s possible for a partitioning program to be much faster than backup and restore if few files have to be moved. That is usually when the partition is being increased in size, or when there are few files in the space being taken out of the partition. This is the silver lining to the file system design (the gray cloud, of course, being the fragmentation problems).
For a long time I didn’t bother defragmenting Windows partitions for the same reason. It was usually faster to do a full copy to another partition and reformat the old one, if I had an empty partition spare.
I agree that partitioning software has become passé, but not for the reasons you mention.
I keep my iPhoto and iTunes files on a totally separate DRIVE. Hard drives have become so cheap that I can just pop down to my local big-box store, pick up a USB drive, bring it home, and plug it in in less time than it took you to write this article, let alone futz around with partitioning, destructively or not.
That’s the way to go, IMO. I keep nothing but applications on my system drive and a few random files in my document folder. Everything else gets stored on various drives that I can even take with me if I want (a feature that proved indispensable about two weeks ago on a nightmarish deadline).
Ah, Beeb, to have enough moolah to not even have to bat an eyelid at buying a new hard drive. But you are right, and that’s definitely the path I’d take if I could afford it.
Maybe when I get an income again I’ll be able to splash out all the time. (Unless some hardware vendor wants to send me free hardware to review…)
Wait a sec, Chris. I thought everyone was using the Neiman Marcus diamond-encrusted 2TB hard drive from Seagate-Sothebys. Or am I completely out of touch?
lol. twooo terrra byyytes…drool drool.
There may be circumstances where you don’t have, or don’t need, a backup of all the data on the volumes being resized. For example, when I did some volume resizing a few years ago, some volumes contained EyeTV recordings and other data that wasn’t backed up; losing that data was an acceptable risk if the resizing failed. Any data on the volumes that I couldn’t afford to lose was backed up first.
Yeah, it’s a safe bet that SD! uses the volume UUID that can be found in a volume’s Info window in Disk Utility or by running diskutil info [volume-name] | grep UUID in Terminal.
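For example, for the “data” volume from the article (the grep just pulls the UUID line out of diskutil’s longer report):

    # Print the volume UUID that SD! could key on instead of the name.
    diskutil info /Volumes/data | grep -i UUID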
I’m looking forward to ZFS making traditional volume resizing obsolete.
Has anyone seen actual performance gains partitioning?
I’ve read that as drives fill up with thousands of files (easy to do with iTunes and iPhoto), partitioning can make the drive a little faster because you don’t have one huge catalog file for the entire system.
Does it really help all that much?
For backup purposes, I’d almost always recommend a separate physical device rather than a partition. Most drives I’ve had go bad in the past died of mechanical failure, and separate partitions would have done nothing to save my data.
I recently had an 80 GB drive go brick on me—and the S.M.A.R.T. status actually warned me this time around. That was the first time I’d ever seen the SMART status on a drive change before it died. Luckily, I didn’t have anything really important on it; it was an older drive I used as a spare.
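Incidentally, you don’t have to wait for the warning dialog; diskutil reports the same status on demand. A small sketch, with a hypothetical disk identifier:

    # "Verified" means the drive thinks it's healthy; anything else is
    # your cue to copy the data off while you still can.
    diskutil info disk0 | grep -i SMART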
I’m curious when you wouldn’t recommend it.
Obviously, if someone only has a single drive, then creating some kind of backup of important data is better than nothing. I’ve suggested using “Create Archive …” on important folders to a few friends who had no other form of backup. Unfortunately it was too late to help one of them, who’d already deleted ~/Documents, but that loss was enough to convince him of the value of cloning his mini to a miniStack using SuperDuper!.
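For friends who’ll tolerate Terminal, the same “archive the important stuff” idea can be scripted; a sketch assuming a destination volume named “Backup” (both the volume name and the path are hypothetical):

    # Zip ~/Documents into a dated archive, preserving resource forks.
    ditto -c -k --sequesterRsrc "$HOME/Documents" \
        "/Volumes/Backup/Documents-$(date +%Y%m%d).zip"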
I’m also contemplating possible performance gains from partitioning. My idea is this: partition along physical boundaries, that is, the platters, of which a single drive usually has three or four. Each platter has its own head, so it could behave like three or four separate and completely autonomous* “drives.” If you distribute your files smartly, each major task (OS X, CS3, Aperture, Fusion and XP, etc.) could access its own “drive.”
You could even turn it into a kind of “RAID 0” for a boost in performance, although that would probably require some participation from vendors.
*Of course there is a bottleneck: they all sit on a single interface. But if it’s SATA, there should be plenty of bandwidth to serve them all simultaneously.
Soon I’m going to get myself some extra hardware, so I could probably experiment with this. Though I don’t yet know how to measure any possible gains.
But what do you think for now: is it a pipe dream, or is there something to it?