I’ve been thinking about this problem for a long time. A HashMap gives amortized O(1) time for inserting and looking up data. But suppose a rehash kicks in when we already have a million records: we can’t just wave that rehash away and call it amortized O(1). It turns into a remapping disaster for the whole server.
So the solution I found is consistent hashing, using multiple markers (virtual nodes) on the ring. With this approach only about K/n keys need to be remapped on average when a node is added or removed, where K is the number of keys and n is the number of cache slots.
This is a great post explaining consistent hashing: https://ihong5.wordpress.com/2014/08/19/consistent-hashing-algorithm/
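To make the idea concrete, here is a minimal Java sketch of a hash ring with multiple markers per physical node. The class and method names (ConsistentHashRing, addNode, getNode) are my own for illustration, and MD5 is just one convenient choice of hash function:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.SortedMap;
import java.util.TreeMap;

/** Illustrative consistent hash ring with multiple markers (virtual nodes) per node. */
public class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private final int markersPerNode;

    public ConsistentHashRing(int markersPerNode) {
        this.markersPerNode = markersPerNode;
    }

    /** Place markersPerNode markers for this node on the ring. */
    public void addNode(String node) {
        for (int i = 0; i < markersPerNode; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    /** Remove all markers belonging to this node. */
    public void removeNode(String node) {
        for (int i = 0; i < markersPerNode; i++) {
            ring.remove(hash(node + "#" + i));
        }
    }

    /** A key is owned by the first marker clockwise from its hash position. */
    public String getNode(String key) {
        if (ring.isEmpty()) {
            throw new IllegalStateException("ring is empty");
        }
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        Long marker = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(marker);
    }

    /** Hash a string to a long using the first 8 bytes of its MD5 digest. */
    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (d[i] & 0xFF);
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The multiple markers are what smooth out the load: with only one marker per node, an unlucky hash placement can leave one node owning a huge arc of the ring.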
Expand the storage by adding a new node:
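As a quick sanity check, using the hypothetical ConsistentHashRing class sketched above, adding a fourth node should remap only roughly K/n of the keys instead of reshuffling everything:

```java
public class ExpandDemo {
    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing(100); // 100 markers per node

        // Start with three cache nodes.
        for (String node : new String[]{"cache-a", "cache-b", "cache-c"}) {
            ring.addNode(node);
        }

        // Record where each key lives before the expansion.
        int keys = 100_000;
        String[] before = new String[keys];
        for (int i = 0; i < keys; i++) {
            before[i] = ring.getNode("key-" + i);
        }

        // Expand the storage: add a fourth node.
        ring.addNode("cache-d");

        // Count how many keys moved; expect roughly K/n = 100000/4 = 25000.
        int moved = 0;
        for (int i = 0; i < keys; i++) {
            if (!before[i].equals(ring.getNode("key-" + i))) {
                moved++;
            }
        }
        System.out.println("keys remapped: " + moved + " of " + keys);
    }
}
```

Only the keys that now fall on the new node's markers move; everything else stays exactly where it was, which is the whole point compared to rehashing a HashMap.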