Use sstablelevelreset to reclaim space if you are unable to run a compaction.

The process below uses sstablelevelreset and is only relevant for Cassandra 2.2 and earlier. On versions newer than 2.2, nodetool can run a major compaction directly on a column family that uses Leveled Compaction Strategy (LCS).

Why use this procedure?

If you are running out of storage and cannot complete any compactions as a result, there is a workaround you may be able to employ. It applies only if you believe a significant amount of obsolete data is sitting at the higher levels, for example tombstones for data that was deleted from the system long ago. The workaround uses sstablelevelreset.

General Rule of Compaction

The general rule for compaction is to reserve 50% of storage for data and the other 50% as compaction headroom. If you use Leveled Compaction Strategy on any of your large column families, you can use sstablelevelreset to claw some space back: under normal operation, level 0 compaction only needs headroom of around 10x the level 0 sstable size (with the default 160MB sstables, that is roughly 1.6GB for 10 sstables). In this space-constrained scenario, however, every sstable (far more than 10) is reset back to level 0, so you should budget for up to 2x the size of the entire column family.
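To make the rule of thumb above concrete, here is a back-of-the-envelope calculation. The 50GB column family size is a hypothetical figure; the 160MB sstable size and 10-sstable level 0 are the defaults assumed in this article:

```shell
#!/bin/sh
# Rough headroom estimate for the rule of thumb above.
# Assumptions: default 160MB sstables, 10 sstables at level 0,
# and a hypothetical 50GB column family.

sstable_size_mb=160
l0_sstables=10
cf_size_mb=50000

# Normal LCS operation: headroom of about 10x one level-0 sstable.
normal_headroom_mb=$((sstable_size_mb * l0_sstables))

# After sstablelevelreset: everything is back at level 0,
# so budget roughly 2x the whole column family.
reset_headroom_mb=$((cf_size_mb * 2))

echo "normal LCS headroom: ${normal_headroom_mb} MB"
echo "post-reset headroom: ${reset_headroom_mb} MB"
```

For the 50GB example that is 1.6GB of headroom in normal operation versus about 100GB after a reset, which is why this procedure should only be used when you expect compaction to purge a large amount of obsolete data.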

With LCS we use sstablelevelreset to move every sstable back to level 0. This has two results: once compaction has redistributed the data there will be fewer levels, so reads to the CF will be much quicker; and any tombstones that were sitting in higher levels will be purged permanently from your storage after compaction runs on level 0.

In order for compaction to run on level 0, there needs to be >= 10 sstables, each at the default size of 160MB. In this case you will have far more than 10 sstables, because everything from the higher levels has been reset, so compaction is guaranteed to start. Any tombstones that were previously in higher levels will be compacted out and the storage reclaimed.

Check the different levels of your CF

nodetool cfstats [keyspace].[column_family]

The output will include a line like the following:

SSTables in each level: [1, 10, 23, 0, 0, 0, 0, 0, 0]

The first position is LEVEL 0, the second position is LEVEL 1, and so on.
This output shows a healthy distribution for this column family. If LEVEL 0 held more than 10 sstables, or LEVEL 1 more than 100, that would be an indication that we need to investigate further.
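As a quick sanity check, you can pull that line out of the cfstats output and flag an unhealthy distribution automatically. This is a sketch: it assumes the exact "SSTables in each level" wording shown above, and it is fed from a sample string here rather than a live node:

```shell
#!/bin/sh
# Flag a column family whose level 0 or level 1 is over-populated.
# Sample line as printed by `nodetool cfstats`; on a live node you
# would pipe the real output in instead.
sample='SSTables in each level: [1, 10, 23, 0, 0, 0, 0, 0, 0]'

# Strip everything outside the brackets, then pick the first two counts.
l0=$(echo "$sample" | sed 's/.*\[//; s/\].*//' | cut -d, -f1 | tr -d ' ')
l1=$(echo "$sample" | sed 's/.*\[//; s/\].*//' | cut -d, -f2 | tr -d ' ')

status=healthy
[ "$l0" -gt 10 ] && status=investigate
[ "$l1" -gt 100 ] && status=investigate
echo "L0=$l0 L1=$l1 -> $status"
```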

If we ran sstablelevelreset on this CF, the output would change to the following:

SSTables in each level: [34, 0, 0, 0, 0, 0, 0, 0, 0]

Process for resetting sstable levels to 0

Step 1)

Stop the Cassandra process

Step 2)

Set concurrent_compactors to 1 in cassandra.yaml.

vi $CASSANDRA_HOME/conf/cassandra.yaml
concurrent_compactors: 1

Step 3)

Set sstable level to 0

cd $CASSANDRA_HOME/tools/bin
./sstablelevelreset --really-reset [keyspace] [column_family]

Step 4)

Start the Cassandra process again

Step 5)

As there is only a limited amount of storage to work with, we do not want much other compaction happening during this process, so we disable autocompaction on the keyspace:

nodetool disableautocompaction [keyspace]

Step 6)

Monitor space and compactions.

df -h
nodetool compactionstats

Step 7)

Once you are happy that compaction has finished, re-enable autocompaction (nodetool enableautocompaction) and restore concurrent_compactors to its original value in the yaml file.
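Putting steps 1 to 7 together, the procedure can be sketched as a script. This is only an outline under assumptions that will differ per installation: the service commands, $CASSANDRA_HOME, and the keyspace/CF names (my_keyspace/my_cf are hypothetical placeholders). It therefore echoes each action via a run helper by default rather than being something to execute blindly:

```shell
#!/bin/sh
# Sketch of the full sstablelevelreset procedure (steps 1-7).
# DRY_RUN=1 (the default) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
KEYSPACE=my_keyspace   # hypothetical: substitute your own keyspace
CF=my_cf               # hypothetical: substitute your own column family

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run sudo service cassandra stop                        # step 1
# step 2: edit cassandra.yaml by hand: concurrent_compactors: 1
run "$CASSANDRA_HOME"/tools/bin/sstablelevelreset \
    --really-reset "$KEYSPACE" "$CF"                   # step 3
run sudo service cassandra start                       # step 4
run nodetool disableautocompaction "$KEYSPACE"         # step 5
run df -h                                              # step 6
run nodetool compactionstats
run nodetool enableautocompaction "$KEYSPACE"          # step 7
```

Run it once with the default DRY_RUN=1 to review the command sequence, then set DRY_RUN=0 only after step 2's manual yaml edit is in place.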

For more troubleshooting articles, check out "How to handle corruption".

