admin@source_gridname:/usr/local/avamar/etc/>: iperf -c repl_target_grid -w 60k -t 30 -i 10
------------------------------------------------------------
Client connecting to repl_target_grid, TCP port xxxx
TCP window size:  120 KByte (WARNING: requested 60.0 KByte)
------------------------------------------------------------
[  3] local xx.xx.xx.xx port xxxxxx connected with xx.xx.xx.xx port xxxx
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  27.3 MBytes  22.9 Mbits/sec
[  3] 10.0-20.0 sec  28.4 MBytes  23.8 Mbits/sec
[  3] 20.0-30.0 sec  28.4 MBytes  23.8 Mbits/sec
[  3]  0.0-30.0 sec  84.1 MBytes  23.5 Mbits/sec
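For reference, the client test above assumes an iperf server was already listening on the replication target (and if a non-default port is used, the same -p port must be given on both ends). A minimal sketch of that side, with the window size simply mirroring the client's -w option:

# run on repl_target_grid before starting the client test above
iperf -s -w 60k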
This shows we have an intra-site bandwidth of roughly 23.8 Mbits/sec. It is generally accepted that Avamar replication can use around 60-80% of the total link capacity; if we use 70% as an average figure, this gives us 16.6 Mbits/sec.
We therefore have 16.6 Mbits/sec, which works out to roughly 7.47 GB per hour.
With a backup window of 20 hours (the current setting), we would be able to replicate up to roughly 149.4 GB per day.
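To spell out the arithmetic behind those figures (the 70% utilization factor and the 20-hour window are the assumptions stated above, and GB here means decimal gigabytes), the same chain of conversions can be checked with a few awk one-liners:

awk 'BEGIN { print 23.8 * 0.70 }'             # 16.66  -> usable Mbit/sec at 70% of the measured link
awk 'BEGIN { print 16.6 / 8 * 3600 / 1000 }'  # 7.47   -> GB per hour (Mbit/sec / 8 = MB/sec, x 3600 sec, / 1000)
awk 'BEGIN { print 7.47 * 20 }'               # 149.4  -> GB that can be replicated in a 20-hour window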
The client xxxx alone is replicating more than 100 GB of new data per day, which already consumes roughly two-thirds of that daily capacity, so we need to focus on the network.
Please check your network settings and let me know if you have any concerns. Thanks.
Wednesday, March 7, 2012
Replication throughput
Avamar 5.0