
Benchmarking Zend_Cache backends for Magento

October 3rd, 2011

The Zend_Cache module from the Zend Framework is a nice piece of work. It has a slew of programmer-friendly frontends and a respectable set of backends with a well-designed interface. I love the à la carte approach, but I am only really interested in the Zend_Cache_Core frontend and the backends that support tagging, since that is what Magento requires. This raises the question: which backend should you use? While I have my own opinion on the matter (ahem, Redis. -post coming soon-ish), I wanted a reliable way to test Zend_Cache backend performance, so I wrote a benchmark! This benchmark was both forked from and inspired by the benchmark found in Vinai Kopp's Symlink Cache. It uses Magento's core/cache model rather than Zend_Cache_Core directly, so a Magento (or Magento-lite) installation and bash are the only requirements.

The purpose of this post is not to provide a bunch of cache backend benchmarks, but rather to introduce my benchmark code in the hope that others will run their own tests and publish their findings. A link to this post is appreciated. Also, if you have any criticisms of the benchmark I'd love to see a pull request. :)

The benchmark suite is fully-featured:

  • Repeatable tests. Dataset is written to static files so the exact same test can be repeated, even with entirely different backends.
  • Test datasets can easily be zipped up and copied to different environments or shared for others to use.
  • Can relatively easily test multiple pre-generated datasets to compare different scenarios on the same hardware.
  • Uses true multi-process benchmarking, each process with a different set of random operations.
  • Flexible dataset generation via options to init command. Cache record data size, number of tags, expiration, popularity and volatility are all randomized.

Currently the benchmarks are run via the command line, so testing the APC backend (or any other that only works in a CGI or Apache module environment) will not work. This could be remedied easily enough with cURL and some PHP copy/paste if you had the desire to test on your actual web server.

Here is an example run using the Redis backend in my dev environment, a Lubuntu VirtualBox guest:

Cache Backend: Zend_Cache_Backend_Redis
Loading 'default' test data...
Loaded 10000 cache records in 29.1080 seconds. Data size is 5008.9K
Analyzing current cache contents...
Counted 10023 cache IDs and 2005 cache tags in 0.2062 seconds
Benchmarking getIdsMatchingTags...
Average: 0.00036 seconds (36.82 ids per tag)
Benchmarking 4 concurrent clients, each with 100000 operations...
4 concurrent clients completed in 62 seconds

         |   reads|  writes|  cleans
Client  1| 1811.83|  184.66|    6.81
Client  2| 1799.84|  165.29|    6.91
Client  3| 1818.90|  165.17|    6.79
Client  0| 1790.91|  153.56|    7.40
ops/sec  | 7221.48|  668.68|   27.91

The important numbers to look at are the summed ops/sec. Given the three variables (dataset, hardware and backend), it is easy to change just one without affecting the others, so this benchmark can be used to test any one of the three reliably. The three metrics observed are reads, writes and cleans. The first two are self-explanatory. The third is a clean operation on a single tag using Zend_Cache::CLEANING_MODE_MATCHING_ANY_TAG, which is the only mode Magento ever uses other than Zend_Cache::CLEANING_MODE_ALL for manual cache refreshes. Individual read/write operations are very fast, so given the large number of operations in a test I did not feel the need to examine min, max, average or standard deviations.

The test uses (hopefully) sane defaults for the dataset generation parameters, but there is plenty of flexibility. I advise you to examine your production environment (number of cache keys, number of cache tags, number of concurrent clients) and tweak the test to more closely match it. Here is the output of the --help CLI option:

$ php shell/cache-benchmark.php --help
This script will either initialize a new benchmark dataset or run a benchmark.

Usage:  php -f shell/cache-benchmark.php [command] [options]

  init [options]        Initialize a new dataset.
  load --name <string>  Load an existing dataset.
  clean                 Flush the cache backend.
  tags                  Benchmark getIdsMatchingTags method.
  ops [options]         Execute a pre-generated set of operations on the existing cache.

'init' options:
  --name <string>       A unique name for this dataset (default to "default")
  --keys <num>          Number of cache keys (default to 10000)
  --tags <num>          Number of cache tags (default to 2000)
  --min-tags <num>      The min number of tags to use for each record (default 0)
  --max-tags <num>      The max number of tags to use for each record (default 15)
  --min-rec-size <num>  The smallest size for a record (default 1)
  --max-rec-size <num>  The largest size for a record (default 1024)
  --clients <num>       The number of clients for multi-threaded testing (defaults to 4)
  --seed <num>          The random number generator seed (default random)

'ops' options:
  --name <string>       The dataset to use (from the --name option from init command)
  --client <num>        Client number (0-n where n is --clients option from init command)
  -q|--quiet            Be less verbose.

To handle multi-process benchmarking, the test is actually launched from a shell script which backgrounds each client and sums the results using awk, so unless you are doing single-process benchmarks you never need to invoke the 'ops' command yourself.
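Conceptually, the summing step boils down to an awk one-liner like this (the client lines and field positions below are illustrative, not run.sh's exact output format):

```shell
# Sum per-client reads/writes/cleans columns into a final ops/sec row.
# Assumes lines shaped like "Client N <reads> <writes> <cleans>".
printf 'Client 1 1811.83 184.66 6.81\nClient 2 1799.84 165.29 6.91\n' \
  | awk '{ reads += $3; writes += $4; cleans += $5 }
         END { printf "ops/sec %.2f %.2f %.2f\n", reads, writes, cleans }'
```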

Give me the code already!

The code is hosted at github.com/colinmollenhour/magento-cache-benchmark. If you use modman you can install it like so:

modman clone cachebench git://github.com/colinmollenhour/magento-cache-benchmark.git

Alternatively, you can download it directly and just extract cache-benchmark.php into the "shell" folder of your Magento installation.

Run a test!

Assuming you’ve cloned/downloaded the code already, here is how you run your first test:

php shell/cache-benchmark.php init
bash var/cachebench/default/run.sh

Could it get any easier?

P.S. I included a "Null" backend, which is just a black hole, for the purpose of getting a general idea of your PHP overhead.

Magento

  • Anonim

     Can your backend be used without Magento, like normal Zend_Cache backends?

  • http://colin.mollenhour.com Colin Mollenhour

    Yes, absolutely.

  • Anonymous

    Hi Colin, it looks like a promising tool, thanks for sharing! I'm really looking forward to your post about Redis, as I want to convince my customers to switch over from memcache. Keep up the good work.

  • Sanjay

    Hi Colin, a little background first: our Magento installation is split, with Admin and Frontend on different servers. We installed your tool on the frontend and it works really well. The Redis server had been set up on the frontend server; we then configured the Admin to talk to the frontend Redis, but when we access the admin URL we get the error "invalid server response: e_1>0Alt_Image". Is there any direction you can recommend for solving this?


  • http://colin.mollenhour.com Colin Mollenhour

    Hi Sanjay, this sounds like an issue with the driver and corrupt cache records. Are you using the phpredis (compiled) mode or the standalone mode? Please open an issue on the GitHub project to continue the discussion. -Thanks

  • Alexandre Piel

    Are you using Redis for the fast backend only, or for both the slow and fast backends?

  • http://colin.mollenhour.com Colin Mollenhour

    I use the Redis backend by itself, no two-levels. While you could use it as either the fast or slow backend (or both) in a two-levels configuration, in most cases it won't make sense to do so, since its built-in tag support uses Redis' efficient data structures to support tagging much more efficiently than the two-levels implementation does.

  • saho

    Do you currently know of any hosts that support Redis? We have a SIP400 dedicated server from Nexcess and they say they cannot support this. It's too bad, because we have around 100k products and our add-to-cart functions are so slow. They said they set this up for another client and saw no noticeable difference in speed, which is hard to believe with all the good things I am hearing about it.

  • http://colin.mollenhour.com Colin Mollenhour

    I don't know much about Magento-specific hosting companies, just VPS and dedicated hosts. There was a short period where I had pulled in some changes that added more garbage collection, but it was very detrimental to performance and not necessary, so I removed it. I'm sure if Nexcess were to test the current version with a real benchmark they would see a huge difference in tag-cleaning performance. Memcached is probably comparable for read performance, though.

  • http://www.greengecko.co.nz/ Steve Holdoway

    If you add

    apc.enable_cli = 1

    to your PHP configuration, you can then use APC from the command line. This will make the benchmarks a lot more relevant.

  • http://colin.mollenhour.com Colin Mollenhour

    Enabling APC on the CLI actually hurts performance, since APC caches opcode per process. This works for e.g. Apache because the cache lives on in the Apache parent process between requests, but for the CLI the cache is rebuilt for each process and lost at the end of it. apc.enable_cli is only for debugging purposes for APC developers and should never be used in production.

  • http://www.greengecko.co.nz/ Steve Holdoway

    Well, technically *any* process attached to a shared memory segment will keep it alive – it’s not *in* any process – last one out turns off the lights.

    If you use memory mapped files ( and who wouldn’t – saves all that pesky kernel reconfiguration! ), then the segment is private. It will not affect any currently running processes, and your benchmarks will be more relevant.
    $ php --info | grep apc.mmap_file_mask
    apc.mmap_file_mask => /tmp/apc.zUiMm1 => /tmp/apc.zUiMm1
    $ php --info | grep apc.mmap_file_mask
    apc.mmap_file_mask => /tmp/apc.V0wyAu => /tmp/apc.V0wyAu

    Or is APC just generating a unique file descriptor to the existing segment and fooling me into thinking it's private? It's easily done…

  • http://colin.mollenhour.com Colin Mollenhour

    Steve, I'm not following. Your example proves that the processes don't share APC caches. If you ran the benchmark on the CLI with APC, each process would be hitting an empty cache (and actually have a performance advantage because of it), so the results would be meaningless. APC is really designed to run within mod_php or php-fpm.

    I'm sure the script could be modified to run off of your web server with some work. However, I'm not really concerned with using APC as a cache backend, because what eventually happens is that the user cache starts filling it up and evicting the opcode cache. APC is a great opcode cache, but everything starts to suffer if you use APC for both opcode and user cache unless you give it a ton of memory. Right now my system uses up to 150MB for the opcode cache. To run the user cache on APC without risking the user cache evicting the opcode cache I'd have to give APC at least 600MB, and that just grows as new features are added or if the cache hasn't been cleared in a long time. Using Redis with compression enabled for records over 20KB, my user cache uses no more than 128MB.

  • Matt Developer

    What numbers are people getting for the benchmark?

  • http://colin.mollenhour.com Colin Mollenhour

    I too would be interested in seeing some numbers that other people are getting. You can also see my “Magento Cache Showdown” that was presented at the 2012 Imagine conference here: http://goo.gl/NDXan

  • Xseasy Xseasy

    Thanks a lot for your great Redis extension, my results:
    Cache Backend: Cm_Cache_Backend_Redis
    Loading 'default' test data...
    Loaded 10000 cache records in 6.31 seconds (6.1118 seconds cache time). Data size is 5011.9K
    Analyzing current cache contents...
    Counted 10045 cache IDs and 3 cache tags in 0.0411 seconds
    Benchmarking getIdsMatchingTags...
    Average: 0.00030 seconds (12.00 ids per tag)
    Benchmarking 4 concurrent clients, each with 100000 operations...
    4 concurrent clients completed in 17 seconds

             |   reads|  writes|  cleans
    Client  1| 7046.66| 1647.28| 1467.66
    Client  2| 6917.84| 1599.31| 1559.90
    Client  3| 6388.22| 1407.96| 1527.16
    Client  0| 6298.69| 1450.43| 1289.70
    ops/sec  |26649.00| 6103.00| 5842.00

  • craig.carnell

    I compared our site with Redis and without. Using the benchmark I saw much lower ops/sec with Redis, but the benchmark finished in much less time. Is higher ops/sec better, or lower?

  • http://colin.mollenhour.com Colin Mollenhour

    Higher ops/sec is better. I don’t know what you were comparing Redis to, but if it was the default “file” backend then you probably saw faster reads and writes but not cleans. With the file backend the cleans are so slow that Redis still finishes in less overall time. See my presentation at Imagine 2012: http://goo.gl/NDXan

  • craig.carnell

     Now that I have a little more understanding, I have repeated my benchmarks.

    I am seeing much better cleans using Redis instead of file; reads/writes are down by quite a lot. The benchmark completed in 30 seconds for Redis and 90 seconds for file.

  • Anonymous

    Hi Colin,

    Congratulations on your excellent work. We are using your cache benchmark and your alternative Zend cache (Cm_Cache_Backend_File). We noticed huge improvements when using Cm_Cache_Backend_File as a single-level cache. But in two-level mode with memcached (memcached as the fast backend and Cm file as the slow backend), writes and cleans drop drastically in comparison to memcached+Zend file.

    memcached+zend file: 53000 reads,7200 writes,5 cleans (51 seconds to complete)

    memcached+cm file: 45000 reads, 119 writes, 2 cleans (118 seconds to complete)

    What could be the cause? And, in terms of performance on a single server, would you rather recommend cm file as a single cache vs 2 level (unfortunately redis or varnish are not an option at this point)?

  • http://colin.mollenhour.com Colin Mollenhour

    Cm file does have additional overhead on writes for each tag, so if you use a lot of tags the write performance will be affected quite a bit. But tag cleaning should still be extremely fast, so something seems amiss. Make sure you test with a number of tags that is representative of the real world (use the analyze feature). If you have keys with over about 50 tags then you are probably choosing tags incorrectly. I think Enterprise Edition uses a ton of tags for the FPC.

    If you are on a single server I very strongly recommend you just use Cm file by itself, as reads, writes and cleans will all be much, much faster than memcached+Zend file. For example (numbers from my Imagine presentation):

    zend file:  20672 reads, 7415 writes, 1.3 cleans

    memcached+zend file:  16494 reads, 2877 writes, 1.2 cleans

    cm file:  52008 reads, 4198 writes, 3391 cleans

    Init for above test:
    $ php shell/cache-benchmark.php init --name basic --clients 4 --ops 30000 --seed 1 --keys 20000 --tags 5000 --min-tags 1 --max-tags 10 --max-rec-size 32768

  • Anonymous

    Thanks for the clarification. We switched from memcached+file to Cm file only; these are our results: 66000 reads, 4970 writes, 2981 cleans. Indeed a huge improvement.

  • Anonymous

    I’m building up a new large instance at Amazon Sydney, so I thought I’d try this benchmark out to compare / contrast caching options:

    Cache Backend: Cm_Cache_Backend_Redis

             |   reads|  writes|  cleans
    Client  2| 3793.39| 1032.68| 1169.74
    Client  1| 3782.76|  917.90| 1073.77
    Client  3| 3770.17| 1000.27| 1120.16
    Client  0| 3762.28|  963.32| 1121.60
    ops/sec  |15108.60| 3914.17| 4485.27

    Cache Backend: apc (Zend_Cache_Backend_TwoLevels) + Zend_Cache_Backend_File

             |   reads|  writes|  cleans
    Client  2| 2945.96| 1132.23|    0.42
    Client  0| 3069.90| 1324.50|    0.44
    Client  1| 3208.46|  419.12|    0.46
    Client  3| 3362.47| 1428.18|    0.48
    ops/sec  |12586.79| 4304.03|    1.80

    Cache Backend:  (Zend_Cache_Backend_File)

             |   reads|  writes|  cleans
    Client  0| 3368.40|  966.49|    0.46
    Client  3| 3058.01|  925.40|    0.47
    Client  1| 3125.76|  503.65|    0.49
    Client  2| 3302.70|  869.07|    0.50
    ops/sec  |12854.87| 3264.61|    1.92

    Cache Backend:  (Zend_Cache_Backend_File) – on a tmpfs system

             |   reads|  writes|  cleans
    Client  1| 3314.74|  703.63|    0.47
    Client  2| 3263.86| 1304.20|    0.48
    Client  3| 3483.76| 1785.13|    0.49
    Client  0| 3580.58| 1365.69|    0.53
    ops/sec  |13642.94| 5158.65|    1.97

    Following on from this, I butchered the code to run via curl, as this is an nginx/php-fpm/APC config. The results are interesting, to say the least!

    Cache Backend: Cm_Cache_Backend_Redis

             |   reads|  writes|  cleans
    Client  0| 3010.40|  746.27| 1055.88
    Client  2| 3023.13|  895.68| 1209.38
    Client  3| 3038.09|  851.76| 1325.70
    Client  1| 3045.78|  835.61| 1408.92
    ops/sec  |12117.40| 3329.24| 4999.88

    Cache Backend: apc (Zend_Cache_Backend_TwoLevels) + Zend_Cache_Backend_File

             |   reads|  writes|  cleans
    Client  1| 9790.57|  511.39|    0.38
    Client  2|10256.64| 1575.13|    0.38
    Client  4|12081.83|  904.16|    0.38
    Client  3|10141.82| 1069.05|    0.38
    ops/sec  |42270.86| 4059.73|    1.52

    Cache Backend:  (Zend_Cache_Backend_File)

             |   reads|  writes|  cleans
    Client  4| 2791.41|   24.35|    0.43
    Client  2| 2847.46|   12.08|    0.44
    Client  1| 2849.35|   12.21|    0.44
    Client  3| 2651.38|  159.00|    0.42
    ops/sec  |11139.60|  207.64|    1.72

    Cache Backend:  (Zend_Cache_Backend_File) – on a tmpfs system

             |   reads|  writes|  cleans
    Client  2| 2930.62|  920.78|    0.43
    Client  1| 2895.48|  731.65|    0.42
    Client  3| 2933.71|  800.37|    0.42
    Client  4| 2942.36|  689.07|    0.42
    ops/sec  |11702.17| 3141.87|    1.69

    Apart from showing the improvement of a memory-backed filesystem, especially with respect to write performance, this does show a huge difference in cache cleans between Redis and, well, everything else.

    The other startling thing is that APC seems to be about 3x faster than all the other read processes (at least when just started!) when run through the web server, and in my view that's where the performance is really needed for general website performance.

    Please note, I have put these results up as a talking point, no more. I'm currently mystified by these results, and haven't looked at the myriad tunables on the server and their effect, or whether curl added a restriction. I haven't had time to run the tests repeatedly; I just took advantage of a pretty clean machine.



  • http://colin.mollenhour.com Colin Mollenhour

    Nice work on the benchmarks, Steve! Thanks for sharing. If your curl-based solution is clean enough, please submit a pull request on github so others can use it as well.

    It appears that Amazon’s disk performance is sub-par (as I guess we all knew) since your Redis numbers are similar to mine but the disk numbers are considerably lower (my disk system was 2x 10k RPM SAS in RAID 1).

    If you are comfortable with using APC as your backend then I'd suggest trying out APC + simplified two-levels[1] + Cm_Cache_Backend_File[2] so that you can still get decent write and clean performance. Cm_Cache_Backend_File also has MUCH faster reads and cleans (with slightly slower writes as the number of tags increases). I personally like APC as an opcode cache and don't want it contending for memory, and I also want cron jobs to be able to clean cache tags if needed.

    On using curl, it shouldn’t really be a restriction at all since the number of cache backend requests per http request is so high that the overhead of a few curl requests is negligible. Just make sure that you have enough fpm workers to handle all requests concurrently.

    [1] https://gist.github.com/2199935
    [2] https://github.com/colinmollenhour/Cm_Cache_Backend_File
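    A sketch of the wiring for such a combination in app/etc/local.xml, using the stock node names that Mage_Core_Model_Cache reads (the cache_dir path is a placeholder, and the simplified two-levels gist may expect slightly different options):

```xml
<global>
  <cache>
    <!-- fast backend: APC shared memory -->
    <backend>apc</backend>
    <!-- slow backend: tag-capable file cache -->
    <slow_backend>Cm_Cache_Backend_File</slow_backend>
    <slow_backend_options>
      <cache_dir>/path/to/magento/var/cache</cache_dir>
    </slow_backend_options>
  </cache>
</global>
```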

  • Anonymous

     Hi Colin, you're welcome!

    I'm very happy with APC as an opcode cache, but find it seems to need its hand held when used as a FE cache. As I'm all for an easy life, I'd rather have the reliability of Redis, even more so if the performance can be improved further.

    For the code, I just modded a copy of shell/abstract.php so _parseArgs also adds to $this->_args (var->val, or true if no val), and disabled the block on running it via the web. A copy of your cache-benchmark calls that one, then run.sh was modded to call

    curl "http:///shell/cache-benchmark2.php?ops&name=default&quiet&client=$1" >>$results 2>/dev/null &

    ( and so on ).

    A bit too messy to publish, I think.

    I did try Redis with tmpfs-backed storage, but that made no difference, sadly. I didn't expect it to, but you never know!

    To continue the benchmarking with your suggestions,

    Cache Backend: Cm_Cache_Backend_File

             |   reads|  writes|  cleans
    Client  3| 6368.37|   31.32|   30.33
    Client  1| 6369.64|   22.31|   32.83
    Client  2| 6479.67|   14.81|  148.38
    Client  0| 6431.70|   14.75|  163.39
    ops/sec  |25649.38|   83.19|  374.93

    (via curl)

             |   reads|  writes|  cleans
    Client  3| 6392.68|   12.60|  260.76
    Client  1| 6453.14|   12.41|   70.64
    Client  2| 6447.38|   14.02|   61.07
    Client  0| 6645.57|   11.86|  361.68
    ops/sec  |25938.77|   50.89|  754.15

    Cache Backend: Cm_Cache_Backend_File ( tmpfs backed )

             |   reads|  writes|  cleans
    Client  1| 6110.14|  618.35|  475.49
    Client  0| 6157.82|  599.61|  531.42
    Client  3| 6089.99|  504.27|  645.29
    Client  2| 6062.66|  516.35|  530.48
    ops/sec  |24420.61| 2238.58| 2182.68

    (via curl)

             |   reads|  writes|  cleans
    Client  0|16362.19| 6384.02| 3693.33
    Client  2|15977.04| 1912.96| 3033.02
    Client  1|15844.31| 6314.13| 8429.89
    Client  3|16488.61| 3142.54| 3942.89
    ops/sec  |64672.15|17753.65|19099.13

    That was so startling, I repeated it a few times!

    With your simplified two-levels code installed, I set the db up as slow, APC as fast.

    Cache Backend: apc (Zend_Cache_Backend_TwoLevels) + database

             |   reads|  writes|  cleans
    Client  0|28878.25|  187.02|  203.54
    Client  2|27260.50|  222.54|  236.22
    Client  1|28523.49|  213.28|  256.13
    Client  3|24748.40|  269.12|  251.64
    ops/sec  |109410.64|  891.96|  947.53

    (via curl)

             |   reads|  writes|  cleans
    Client  2|19708.29|  270.99|  282.95
    Client  3|19289.06|  252.97|  253.69
    Client  0|19365.14|  226.09|  234.98
    Client  1|19043.10|  223.48|  245.43
    ops/sec  |77405.59|  973.53| 1017.05

    Just to see what happens, I tried Cm_Cache_Backend_File + your simplified Zend_Cache_Backend_TwoLevels, but it seems it doesn’t implement the Extended Cache Interface. Shame!

    So I'm in a bit of a quandary as to which way to go… it looks like your File backend on a memory-backed filesystem is going to provide the best bang for the buck without the unreliability of using APC as a front-end cache.

    Thanks for your amazing work!


  • http://colin.mollenhour.com Colin Mollenhour

    Wow, Cm_File with tmpfs is blazing fast! The only explanation I can think of for Cm_Cache_Backend_File being faster via curl than via the CLI is that PHP's stat cache, which is shared via a parent process, would have a better hit rate via curl.

    However, even with that blazing-fast speed I'd still be wary of using tmpfs, since there is always a risk that a bug or exploit fills up tmpfs' allocated space, which would cause your system to swap; eventually you'd get write errors and you might even invoke the OOM killer. I suppose a cron could try to free some space, or you could set up an alert for low-memory conditions, but using a real disk or Redis with the volatile-lru eviction policy is "safer".

    Again, AWS disk performance proves to be terrible compared to a bare-metal server. So if you’re on AWS I’d say Redis or tmpfs, but with a fast disk I think Cm_File on a real disk is a good balance between rock-solid stability and performance.

    Note that APC via the CLI is not a valid test, since each process starts with an empty cache: reads will be faster since no data is returned, and I would think cleans would be faster since there are no keys or tags to clean…

    Thanks again for sharing your benchmark results!

  • Anonymous

    We don't have a huge number of options down here (Amazon only arrived across the ditch a month or two back), VPS solutions tend to be too limited in resources, and dedicated servers are outrageously priced, so we are to some extent playing on a different field to most other locations.

    This is why I’m not too bothered with disk performance… if I can keep everything in memory, then at most it affects the startup time. Well, in this perfect world we all live in (:

    I have a site that generates huge numbers of minute cache files (entertaining import process!)… after a million or two, performance gets really slow, as you can imagine. A cron job based on find -mmin can be used to trim the count by removing the oldest files, as well as controlling capacity. The ability to resize a tmpfs partition on the fly is also useful if, as you say, there is enough memory available.
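    A trim job along those lines can be sketched as follows (the scratch directory and 120-minute threshold are made up, and GNU touch/find are assumed):

```shell
# Demonstrate pruning old cache files with find -mmin.
dir=$(mktemp -d)                        # stand-in for the real cache directory
touch -d '3 hours ago' "$dir/old.rec"   # an "old" cache file (GNU touch)
touch "$dir/new.rec"                    # a fresh cache file
find "$dir" -type f -mmin +120 -delete  # remove anything older than 120 minutes
ls "$dir"                               # only new.rec remains
rm -rf "$dir"
```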

    **Your knowledge of the Magento/Zend caching mechanism is far greater than mine, so please comment if this is a risky strategy.**

    That's this lazy sysadmin's approach anyway!

    Thank you once again for your work.

  • Andy Bird

    Hi Colin,

    Just doing some testing..

    Cache Backend: Apc (Zend_Cache_Backend_TwoLevels) + Memcached
    ops/sec |60419.69|16282.42|55643.57

    Cache Backend: Cm_Cache_Backend_Redis
    ops/sec |47184.50| 8597.43| 8600.63

    does that sound right?

  • http://colin.mollenhour.com Colin Mollenhour

    Two problems: APC cannot be tested accurately with the current benchmark code (since APC can’t share memory between CLI processes) and APC+Memcached is not a valid combination since Memcached doesn’t support tagging.

  • Andy Bird

    OK great. thanks for getting back

  • Andrew

    Hi Colin,

    I am using your Cm_Cache_Backend_Redis and it works great. Thank you so much, stellar work!

    I’m just a little confused about the correct configuration (local.xml or other) for a multiple server setup. I have looked at the Redis documentation and can’t seem to find the answer.

    I have 2 front facing web servers on an LB connected to a single DB. Can I link all three servers to one Redis?

    I have Redis on each server at the moment. Would just like them all using the same Redis pool so I can then benchmark and see my site fly.

    I hope to hear from you soon. Many thanks again – keep up the good work.


  • http://colin.mollenhour.com Colin Mollenhour

    You should definitely have them all using the same Redis instance. Otherwise, when a cache tag is cleared on one instance it will not be cleared on the other instances.

  • Andrew

    Many thanks for your response Colin,

    So I'm assuming that the local.xml will look roughly the same as a memcached setup, where the servers are defined in the tags but with each server's db defined the same?

    Many thanks again,

  • http://colin.mollenhour.com Colin Mollenhour

    Yes, they should all point to the same server, but "localhost" will obviously not work, so assuming your servers are on some sort of private LAN like 192.168.1.x you need to use the private LAN IP.
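    As a sketch, each server's app/etc/local.xml would carry the same cache section pointing at the one Redis instance (the IP and database number here are placeholders; option names follow the backend's README):

```xml
<global>
  <cache>
    <backend>Cm_Cache_Backend_Redis</backend>
    <backend_options>
      <server>192.168.1.10</server> <!-- private LAN IP of the Redis host -->
      <port>6379</port>
      <database>0</database>
    </backend_options>
  </cache>
</global>
```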

  • Andrew

    Thanks again Colin,

    I have the two front facing servers talking to a single redis instance on the db server now.

    The results aren't that great though. I'm not sure if I should set Redis up on the two front-facing servers and make them SLAVES of the db MASTER, although I believe if I did this the MASTER would never get any new cache data, as it is not serving Magento, and then the SLAVES in turn would not get any cache information either.

    Any thoughts on this?

    Here are my results at the moment with Cm_Cache_Backend_Redis from one of my web servers:

    Loaded 10000 cache records in 48.7164 seconds. Data size is 5051.6K
    Analyzing current cache contents…
    Counted 10598 cache IDs and 2082 cache tags in 0.1044 seconds
    Benchmarking getIdsMatchingTags…
    Average: 0.00168 seconds (36.11 ids per tag)
    Benchmarking 4 concurrent clients, each with 100000 operations…
    4 concurrent clients completed in 112 seconds

             |   reads|  writes|  cleans
    Client  2|  904.44|  218.30|  469.97
    Client  0|  906.29|  211.21|  399.92
    Client  3|  904.23|  228.57|  355.21
    Client  1|  900.74|  200.34|  482.98
    ops/sec  | 3615.70|  858.42| 1708.08

  • http://colin.mollenhour.com Colin Mollenhour

    Sounds like maybe you are using a public network and not a private network. Make sure you have a *gigabit* private LAN between all servers and that you are using the private LAN IP in the local.xml config, not the public IP. If you can't get a private gigabit LAN with your host then I suggest you find a new host, as bandwidth will be a major bottleneck. With a private gigabit LAN your results should be very close to when using

  • Andrew

    Hi Colin,

    Thanks again for your reply. They should definitely be on a gigabit LAN and the ping doesn’t seem too bad at an average of 1.412ms.

    I have found that MAXMEMORY is my friend :) I increased it and now Magento is flying, although surprisingly the new benchmark doesn't reflect this; see below:

    Analyzing current cache contents…
    Counted 10138 cache IDs and 2057 cache tags in 0.0953 seconds
    Benchmarking getIdsMatchingTags…
    Average: 0.00194 seconds (36.42 ids per tag)
    Benchmarking 4 concurrent clients, each with 100000 operations…
    4 concurrent clients completed in 148 seconds

             |   reads|  writes|  cleans
    Client  3|  680.77|  217.67|  274.30
    Client  2|  679.71|  215.23|  283.08
    Client  0|  680.10|  214.09|  225.64
    Client  1|  678.86|  218.13|  255.33
    ops/sec  | 2719.44|  865.12| 1038.35

    Weird – it is definitely faster though.

    Many many thanks for your help again – greatly appreciated.


  • http://colin.mollenhour.com Colin Mollenhour

    Yes, a too-low maxmemory will definitely do it. :) However, note that my ping between two dedicated servers averages 0.138 ms, about 10x faster than yours.

    Protip: Monitor memory usage and evictions with the munin plugin mentioned in the README.

  • Andrew

    Hi Colin,

    Thanks again – have done that. See attached – all look good?

    Also, our system log on one web server shot to 40GB this morning with this little line of goodness for 20 minutes:

    Notice: fwrite(): send of 78 bytes failed with errno=32 Broken pipe in /******/******/******/*****/lib/Credis/Client.php on line 663

    Do you reckon changing might fix this?

    Also, can you send me your PayPal email so I can send you a beer? Your help has been great! :D


  • http://colin.mollenhour.com Colin Mollenhour

    The broken pipe shouldn't have anything to do with connect retries, but I don't know what is causing it. I don't see why reads should be timing out either, so I don't think adjusting that will help. I'd try disabling persistent connections; they haven't been rigorously tested in standalone mode and I wonder if they are breaking between requests. Your hit rate looks oddly low; mine is *always* in the upper 90s, though I'm not sure if that means anything. Please open an issue on GitHub to continue this dialog.

    I’ll take you up on that beer if you ever happen to pass through Knoxville. :)


  • http://www.ecommerce.mi.it/ Simone Fantini

    Hi Colin, I would also like to share my benchmark:

    Cache Backend: Cm_Cache_Backend_Redis

    Loading default test data…
    Loaded 10000 cache records in 3.44 seconds (3.2750 seconds cache time). Data size is 5010.2K
    Benchmarking 4 concurrent clients, each with 50000 operations…
    4 concurrent clients completed in 8 seconds

             |   reads|  writes|  cleans
    Client  0| 7225.41| 1462.65| 1935.77
    Client  1| 7157.72| 1572.78| 1895.01
    Client  3| 7138.31| 1526.80| 1891.42
    Client  2| 7148.13| 1442.78| 1935.53
    ops/sec  |28669.57| 6005.01| 7657.73

    How does it look?

    I'm using Redis for sessions and for cache.


  • http://www.iwebsolutions.co.uk/blog/modgrind-magento-performance-profiling/ ModGrind – Magento Performance Profiling | iWeb Blog

    […] Cm_Cache_Backend_File module – this is a very efficient Magento caching module (see this cache benchmark script if you want to test for yourself) and if it’s being used heavily that suggests that much of the page is able to be served from […]
