Double the killer, delete select all

(If the title doesn’t ring a bell, google it for some good-hearted laugh)

Today I would like to briefly discuss deletion.

Now, if you’ve been through any software interview lately, you know there are usually plenty of good, interesting questions about storing data. Should it be indexed? If so, how? Should it be SQL? NoSQL? How about cloud vs. local storage? Replicated? Clustered?

It’s less often that deletion and deletion strategies are given the same depth of discussion. And yet, I’ve recently encountered a related challenge at work which has inspired this post. What’s a good deletion strategy and are all deletion strategies born the same?

Let’s have a deeper look at situations when data deletion is more than a triviality.

When all else fails, we can always revert to basics. CC0 / Pexels courtesy of Pixabay

Strategy #1

Our first deletion strategy for the day would be “Let someone else handle it”.

That sounds a bit like cheating, right? But much like some languages like to handle garbage collection for us, I think it’s fair enough. Not to mention it’s easier.

For example, some libraries’ approach to opening a temporary file on Linux is to create it and immediately unlink its directory entry. The file’s link count drops to zero, but since we still hold an open file handle, the kernel keeps the inode alive. When we drop that last handle, the link count is zero with no open references left, and the filesystem layer handles the deletion for us.
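Here’s a minimal sketch of that pattern in Python on Linux (the file name is made up for illustration; on other platforms, unlinking an open file may not be allowed):

```python
import os

# Create a file, grab a handle, then unlink its directory entry.
# On Linux the kernel keeps the inode alive while the handle is open.
path = "scratch.tmp"
f = open(path, "wb+")
os.unlink(path)              # directory entry gone; link count is now 0

f.write(b"still writable and readable through the handle")
f.seek(0)
data = f.read()
print(data[:5])              # b'still'
print(os.path.exists(path))  # False - there is no name in the directory

f.close()  # last reference dropped; the kernel reclaims the space for us
```

This is essentially what `tempfile.TemporaryFile` does under the hood on POSIX systems.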

Another example of such a strategy is AWS S3 object expiration. AWS will handle object deletion for you, but there are limits on the number of rules being evaluated and it’s run at daily intervals. This is obviously not so well-suited to things you want to delete here and now.
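A lifecycle rule of that kind looks roughly like this (the rule ID and prefix are hypothetical; this is the shape boto3’s `put_bucket_lifecycle_configuration` expects):

```python
# Hypothetical lifecycle rule: S3 itself expires anything under tmp/
# after one day. Expiration runs on S3's schedule, roughly daily -
# not the moment an object turns stale.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-tmp-objects",
            "Filter": {"Prefix": "tmp/"},
            "Status": "Enabled",
            "Expiration": {"Days": 1},
        }
    ]
}

# With real credentials this would be applied via:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle)
print(lifecycle["Rules"][0]["ID"])
```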

Now, letting someone else handle it is fun alright, but it sure has its limits: the use cases are quite specific, and the service may not be offered at all, or not on the terms we need.

So now it’s up to us to do the job. Next strategy.

Strategy #2

Our second deletion strategy for the day would be truncation. In essence, throwing away the entire container and letting someone else reap the resources that it contained. A container could be a filesystem, a database table or partition, an AWS EBS volume, or even an entire S3 bucket. One delete call and it’s gone for good, DoD deletion standards notwithstanding.

That sounds like cheating again, right? But as the adage goes — “win if you can, lose if you must, but always cheat”.

Truncation is almost always a whole lot faster than more pinpointed deletions. Its obvious downside is that we had better have planned our container layout appropriately to begin with, or else we’re going to lose some data that we do want to keep. Such pre-planning might not be feasible for all scenarios.
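As a sketch of that pre-planning, here’s the classic date-partitioning trick with SQLite standing in for the container (table names and schema are made up; real databases offer native partitioning for this):

```python
import sqlite3

# Hypothetical layout: one events table per day, so expiring a day
# is a single DROP TABLE instead of many row-by-row deletes.
conn = sqlite3.connect(":memory:")
for day in ("2024_01_01", "2024_01_02"):
    conn.execute(
        f"CREATE TABLE events_{day} (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.executemany(
        f"INSERT INTO events_{day} (payload) VALUES (?)", [("x",)] * 1000)

# Expire the old day: the whole container goes at once.
conn.execute("DROP TABLE events_2024_01_01")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['events_2024_01_02']
```

The catch is exactly the one above: the partition boundary has to match how you’ll want to delete, and you have to choose it up front.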

But when it is… we get to enjoy this joy:

# it has always been my dream to just delete Kubernetes. I finally get to live it out.
# for comparative purposes:
$ time rm -rf kubernetes

real    0m1.177s
user    0m0.118s
sys     0m0.863s

# now using a container filesystem:
$ dd if=/dev/zero of=myfs count=10240000 bs=2048
$ mkfs.ext4 myfs
$ sudo mkdir /mnt/test
$ sudo mount -o loop myfs /mnt/test
$ sudo cp kubernetes /mnt/test -R
$ sudo umount /mnt/test
$ time rm -f myfs

real    0m0.379s
user    0m0.000s
sys     0m0.218s

Okay, okay. Enough playing around. Now it’s definitely time for some real legwork.

Or is it?

Strategy #3

Our third strategy is “rendering useless”. While not deleting per se, it’s sometimes a sufficient equivalent, allowing us to delay the actual delete.

Cheating yet again? Don’t mind if I do!

The generally accepted way of rendering something useless is encrypting it from the get-go, then deleting its encryption key. The catch here is that aside from the advance preparation, we need a sufficiently unique key such that it’s not used for other data as well. A bit like the container example above, if you will.
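A toy sketch of this “crypto-shredding” idea (the keystream below is a deliberately simple illustration built on a hash, not real cryptography — production systems use proper envelope encryption with a KMS):

```python
import hashlib
import os

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy counter-mode keystream cipher - for illustration only."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.blake2b(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# One sufficiently unique key per record, kept separately from the data.
keys = {"record-42": os.urandom(32)}
ciphertext = xor_stream(keys["record-42"], b"sensitive payload")

# "Delete" the record by destroying its key.
del keys["record-42"]
# The ciphertext may still sit on disk, but without the key it's noise.
```

Note that the key must really be per-record (or per-container): shred a shared key and you’ve shredded everything it protected.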

Limits being limits, this won’t help much when your goal is lowering your storage costs or making room for other data, though.

Okay, time for some real legwork. No more fooling around.

Actually deleting the thing

Now, classic delete strategies have sufficient nuances to warrant a post of their own! But I’d like to briefly discuss what I believe is the key idea.

And what better way to start than with the ubiquitous, well-known, and well-loved (accidental whole-disk deletes notwithstanding) “rm -rf”.

Let’s have a look.

# It’s totally random that I’m deleting kubernetes again. honest!
# strace -c rm -rf kubernetes/
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 36.55    0.179019           6     28501           unlinkat
 14.97    0.073310           2     26376           fcntl
 12.46    0.061001           2     21107           fstat
 11.00    0.053855           3     15829           getdents64
 10.26    0.050264           3     16035           close
  8.64    0.042337           3     10766         8 openat
  5.65    0.027678           5      5278           lstat

Yikes! We asked rm to delete an entire folder for us, and it looks like only ~35% of the time spent doing actual work went to deletion per se. Where did the rest go?

Turns out it went to finding the entries we wished to delete (getdents64), checking whether they happened to be subdirectories (fstat), and then recursing into them (openat).

Could we have saved those? If we had known the exact paths and types of the resources in that subtree, we could have. Perhaps a simpler indexing scheme (or no indexing at all) would have made it easier for us.
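A small sketch of that idea: if we record every path we create in a manifest (the layout below is hypothetical), deletion becomes a straight run of unlinks with no directory walking at all:

```python
import os
import tempfile

# Record every file we create, so deletion needs no directory scan
# (no getdents64/fstat/openat recursion - just one unlink per path).
root = tempfile.mkdtemp()
manifest = []
for i in range(100):
    path = os.path.join(root, f"blob-{i}")
    with open(path, "wb") as f:
        f.write(b"data")
    manifest.append(path)

# Delete straight from the manifest.
for path in manifest:
    os.unlink(path)
os.rmdir(root)

print(os.path.exists(root))  # False
```

The manifest is, of course, its own little index to maintain — there’s no free lunch, only cheaper ones.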

Now, deletes also have another undesirable way of interacting with indexes. If we delete an item that’s indexed, the index has to be updated to match. If we’re deleting a bunch of them, we may end up paying the re-indexing cost for each delete operation.
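One common mitigation, sketched here with SQLite and a made-up schema: drop the secondary index, bulk-delete, then rebuild the index once, instead of paying its maintenance cost on every deleted row (whether this wins depends on the database and on what fraction of rows you’re deleting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, ts INTEGER, payload TEXT)")
conn.execute("CREATE INDEX idx_events_ts ON events(ts)")
conn.executemany("INSERT INTO events (ts, payload) VALUES (?, ?)",
                 [(i, "x" * 10) for i in range(10000)])

# Naive path: every deleted row would also update idx_events_ts.
# Bulk alternative: drop the index, delete, rebuild it once at the end.
conn.execute("DROP INDEX idx_events_ts")
conn.execute("DELETE FROM events WHERE ts < 9000")
conn.execute("CREATE INDEX idx_events_ts ON events(ts)")

remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(remaining)  # 1000
```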

Yes, filesystems are included.

(Now do you see why I hate legwork?)

Summary

As much as I’m tempted to get carried away here, I suppose I’ll have to summarize the discussion with ‘understand the linking structures of your data and try to optimize your deletion procedure in advance if possible.’

Or cheat.

Now if you’ll excuse me, I have some leftover kubernetes folders I seriously need to see go ‘poof’ in smoke.

A Gil, of all trades. DevOps roles are often called “a one man show”. As it turns out, I’m not a man and never was. Welcome to this one (trans) woman show.