By deduplication performance, do you mean the deduplication percentage on the RPS? From what I understand, deduplication effectiveness depends heavily on the block size you set for dedup and on the source data you are backing up, not on RAM or SSD performance. Data such as images, audio files, and video files do not dedup well, since a minor change in the source can shift the entire structure of the file, making the whole thing look like a different file.
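That shift effect is easy to demonstrate with a toy fixed-block dedup sketch (block size and hashing scheme here are illustrative, not any product's actual implementation): inserting a single byte near the front of a file moves every later block boundary, so none of the block hashes match the original.

```python
import hashlib

BLOCK_SIZE = 8  # tiny block for illustration; real stores use e.g. 4-64 KB


def block_hashes(data: bytes) -> list:
    """Split data into fixed-size blocks and hash each block."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


original = b"AAAAAAAABBBBBBBBCCCCCCCCDDDDDDDD"
# Insert one byte at the front: every block boundary after it shifts.
modified = b"X" + original

orig_hashes = set(block_hashes(original))
mod_hashes = block_hashes(modified)

# Count how many blocks of the modified file dedupe against the original.
shared = sum(1 for h in mod_hashes if h in orig_hashes)
print(f"{shared} of {len(mod_hashes)} blocks deduplicated")
# -> 0 of 5 blocks deduplicated
```

A one-byte insertion deduplicates zero blocks, which is why formats that shuffle or recompress their internal layout (images, audio, video) dedupe so poorly.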
Increase Deduplication Performance
I am looking for ways to increase performance on our deduplication datastore.
We currently use SSD drives to handle the hash tables which greatly reduces the need for RAM. Would we get better performance by dedicating more RAM to the deduplication store?
Here's an example of the current hardware we are using:
Supermicro motherboard with dual Intel E5-2603 processors
32 GB 1333 MHz memory (20 GB dedicated to hash memory)
Samsung EVO PRO 512 GB SSD (hash destination)
Adaptec RAID controller with 7,200 RPM 4 TB drives (the obvious bottleneck)
We have tried SSD caching, which helps a bit, but we still need better performance.
My thought is to bump the system memory up to 128 GB and dedicate 64 GB or more to the database. Does the trade-off with SSDs negate that?
I assume you are looking for ways to increase backup/restore (read/write) performance. Assigning more memory to the hash role of the dedupe datastore does have a positive impact, as it allows more hash entries to be loaded into RAM. Additionally, placing the index location on faster disks will also help increase read/write performance.
Hope this helps.
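A rough back-of-envelope sketch shows why more hash RAM matters. The per-entry size and block size below are assumptions for illustration only (the real figures depend on the dedup product and its configured block size), but the shape of the calculation holds: the RAM dedicated to the hash table caps how much unique data can be indexed in memory.

```python
# Hypothetical sizing figures -- the real per-entry cost and dedup block
# size depend on the product's implementation and configuration.
ENTRY_BYTES = 50           # assumed size of one in-memory hash entry
BLOCK_SIZE = 32 * 1024     # assumed dedup block size (32 KB)
GB = 1024 ** 3
TB = 1024 ** 4


def addressable_data_tb(hash_ram_gb: int) -> float:
    """Unique data (TB) whose hash entries fit in the given hash RAM."""
    entries = hash_ram_gb * GB // ENTRY_BYTES
    return entries * BLOCK_SIZE / TB

for ram_gb in (20, 64):
    print(f"{ram_gb} GB hash RAM -> "
          f"~{addressable_data_tb(ram_gb):.0f} TB of unique data indexed")
```

Under these assumed numbers, going from 20 GB to 64 GB of hash memory roughly triples the amount of unique data whose hashes stay resident in RAM instead of spilling to the SSD.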
Thank you for replying, Ajay. That is exactly what we are trying to do: increase read/write performance. I will try adding more memory and moving the index path.