One year review of Code42’s CrashPlan+ Unlimited cloud storage service

About a year ago, I jumped into a 4-year digital marriage with CrashPlan+ Unlimited. Since then, CrashPlan has been quite instrumental in my backup ecosystem.

First of all, I signed up for a 4-year CrashPlan+ Unlimited account for ~$190.

This comes to about $4 per month.

I do not recommend jumping into a 4-year commitment with any service before trying them out first.

Don’t be impulsive like me. (In my defense, however, CrashPlan offers a risk-free money-back guarantee.)

Here are the main selling points that convinced me to choose CrashPlan over other providers:

  • Linux/OS X compatibility
  • 448-bit encryption, with the option to upgrade security further
  • Ability to restore individual files over the web
  • Free remote backups to other drives or to friends’ and family members’ computers
  • Indefinite retention of deleted files
  • Unlimited storage space with no limit on individual file size
  • Easy, set-it-and-forget-it interface on Mac OS X
  • Priority customer service with paid plans
  • Data de-duplication and a sound technical backup process
  • The “single computer” plan allows you to attach external storage
  • The most recent files get priority over older files when the backup engine runs

Not too shabby. Especially for ~$4 per month.

At the time of this post, I have backed up around 700GB to CrashPlan’s servers, and I will likely have backed up several terabytes of data by the time my 4-year subscription comes to an end.

Most of my data consists of irreplaceable audio, video, and images. It is generally recommended that you keep 3 copies of your data in 3 different locations:

1.) With YOU

2.) In the Cloud (i.e. CrashPlan)

3.) Offsite (i.e. Grandma’s house, etc.)
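The three copies above can be sketched with everyday shell tools. This is a minimal illustration only; all paths are made up, and in practice CrashPlan’s client (copy 2) and a rotated offsite drive (copy 3) replace the manual `cp` and `tar` steps:

```shell
# Minimal sketch of the 3-copy rule; all paths are hypothetical.
set -e
demo=/tmp/backup-demo
rm -rf "$demo"
mkdir -p "$demo/source" "$demo/local_backup"
printf 'irreplaceable photo\n' > "$demo/source/photo.raw"

# Copy 1: the working data that stays with you.
# Copy 2: a mirrored local backup (CrashPlan's desktop client plays
#         this role, continuously and with versioning).
cp -a "$demo/source/." "$demo/local_backup/"

# Copy 3: an offsite archive, e.g. a drive rotated to a relative's
#         house (or, with CrashPlan, a backup to a friend's machine).
tar -czf "$demo/offsite.tar.gz" -C "$demo/source" .

# Never trust a backup you haven't verified.
diff -r "$demo/source" "$demo/local_backup" && echo "local copy OK"
```

The verify step at the end matters as much as the copies; an unverified backup is only a hope.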

Unfortunately, I have not set up a (#3) offsite solution just yet. I am debating whether I should use CrashPlan for encrypted backups in my offsite location as well, but I am hesitant to depend on the same provider for both my primary and secondary backups.


Here are some annoyances of using CrashPlan:

File analysis & scanning is too slow
If I plug in my 1TB external drive, CrashPlan takes several hours just to analyze, scan, and index all the files on the drive. That much is expected, but if I remove the drive and plug it back in, the whole process restarts from scratch and I am forced to wait several more hours for the analysis/scanning to finish before anything gets backed up! An easy solution to this annoyance escapes me right now, but it would certainly help if this process were dramatically faster in the future.

The initial backup takes forever
Again, other than CrashPlan’s seeded-drive service, an easy answer escapes me. It took me about 3 weeks of continuous uploading to back up about 400GB of data.

The backup engine runs in Java
For fuck’s sake, this is annoying. If I plug in my 1TB drive, Java swallows up so much RAM that I sometimes contemplate stabbing my eyes with a spork.

While I have restored a couple dozen individual files from CrashPlan, I should note that I have never done a full restoration of my data from CrashPlan’s mirror. I wish there were an easy way to test data integrity without doing a full restore over the internet.

Overall, I recommend CrashPlan, especially for the Power User.

A better backup solution might come to market in the near future, but as of now, CrashPlan is certainly a strong choice.

When my data tops 10TB, I might have to look for another solution, perhaps something along the lines of Amazon Glacier. Until then, I will be using CrashPlan.

If you have any alternative recommendations, in any capacity, let me know.


  1. The initial backup taking forever is primarily due to the data de-duplication feature. If you disable it (the dataDeDupAutoMaxFileSizeForWan setting), backup speed increases dramatically.

    In my case, initial backup was about 4TB. From weekly reports, the backed up data was: 25%, 40%, 50%, 65%, (disabled deduplication) 100%.
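[For reference: as far as I can tell, the setting the commenter mentions lives in CrashPlan’s my.service.xml configuration file. The fragment below is an assumption about its shape, not an official reference; the file path, surrounding element name, and threshold value are all illustrative, so check CrashPlan’s support docs and back up the file before editing:]

```xml
<!-- Hypothetical fragment of CrashPlan's conf/my.service.xml.
     Lowering the WAN de-duplication threshold to 1 byte effectively
     disables de-dup for any file larger than that, trading extra
     upload volume for far less CPU work per file. -->
<serviceBackupConfig>
  <dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>
</serviceBackupConfig>
```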


    1. I am not sure if I am ready to disable the data-deduplication feature. From the CrashPlan website: “..Data de-duplication prevents CrashPlan from backing up duplicate data when two or more files in your backup selection include the same information. This reduces the amount of storage space needed for your backup and speeds up the back up process….”.

      I have duplicate files and disabling the feature would waste more time re-uploading them.


      1. You essentially choose between re-uploading some data (my colo plan includes 20TB of data transfer on 1Gbps, so that wasn’t really an issue) and using your CPU to find duplicate data. Pretty much all of the data in my backup set were unique files (duplicate data being maybe ~10GB at most), so my CPU usage was constantly at 100% for no good reason. It would likely have taken me another month to back up the rest of the data.


      2. Thanks for your post. I’ve been using CrashPlan for about 6 months. I’m using the free backup to my mother’s house. It works great. I seeded a USB drive at my house and then took it to her house. At first the two sites couldn’t “see” each other until I port forwarded local port 4242 to a common random external port at both ends. Then they connected fine. The de-duplication is awesome.
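[For anyone replicating the commenter’s friend-to-friend setup: a quick way to check whether a forwarded port is actually reachable is a small TCP connection test. The helper below is a generic sketch, not part of CrashPlan; the host and port in the example call are placeholders you would swap for the remote site’s external address and forwarded port.]

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Placeholder target: CrashPlan's default peer-to-peer port on localhost.
    # Replace with the other site's external IP and forwarded port.
    print(port_open("127.0.0.1", 4242))
```

If this returns False from outside the network, the forwarding rule (or a firewall) is the likely culprit rather than CrashPlan itself.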


      3. So basically all of these complaints are not crashplan’s fault.

        It takes a long time to index the backup disk? Yes… Because when you remove the external hard disk and re-insert it, you unmounted the archive. Now CrashPlan has to verify the integrity of the archive before doing another backup. Backup to removable storage is kludgy at best. You SHOULD be backing up to another PC on the network running CrashPlan with permanent storage, like an internal HDD. CrashPlan needs continuous and permanent access to its backup archive.

        It takes forever to do the initial backup…. No shit. That’s a result of your upload speeds, not CrashPlan, and THEY OFFER A SEEDING SERVICE to mitigate that issue. What more do you want? Magic?

        Java takes up a lot of memory… Dear lord you have no idea what is going on. Java is irrelevant to Crashplan’s memory consumption. You are again making the mistake of using an external hard drive and unplugging it constantly. CrashPlan needs to be a backup agent (which only needs a hundred megs of ram), and now also needs to be the backup SERVER and manage your enormous archive, which needs to be re-indexed and verified now because you unplugged it. For half of a terabyte of audio/video files and possibly millions of files, this takes RAM to organize it all efficiently. Again, what do you expect, magic?

        If you ran a separate computer to receive backups and handle this burden you wouldn’t be having any trouble.

        And as far as verifying integrity of the data is concerned…. CrashPlan does that continuously and automatically and makes repairs as needed. It is one of their big selling features and also the main reason it takes so long for the external hard drive to become available for backups after plugging it in.

        I’m still waiting to hear a legitimate complaint. You want to have your cake and eat it too.


      4. Very poor tech support, actually pretty much no tech support beyond supplying a link to an irrelevant article on their website. Too good to be true often is just that.


      5. If you’re looking for a solid multi-site solution, I would recommend checking out the Duracell Cloud products (for an all-in-one solution), or buy a CTERA appliance (or two; they are economical NAS devices) and work with one of their partners for the cloud storage, or go straight with CTERA+AWS.


      6. Re: memory usage. I don’t care whether it is Java or not; memory usage by CrashPlan is ridiculous. Brand-new computer, backing up 400MB, and CrashPlan uses 380MB of RAM!

