About

December 29th, 2014

I like computers.

Email: md5(‘cmblog’)@mollenhour.com

  • http://aliensgrin.com/ Your Name

    Hi Colin,

    I notice in your modman script you have a facility for using hard links instead of soft links but it is “not recommended”. Why is that? Was there a specific problem the hard links introduced?

    It seems to me that hard links might even be a better option as it means this will work independent of whether FollowSymlinks is enabled or not.

  • http://colin.mollenhour.com Colin Mollenhour

    The problems that come along with using hardlinks are that detecting and repairing situations where files were moved/renamed/deleted becomes very difficult. Also, uninstalling a module becomes potentially much more problematic. I.e. with symlinks I can remove a module by deleting it and running modman repair (thanks to the easy detection of broken symlinks). There are also other problems, such as having the root of your site as a repository causing weird repository conflicts, etc..
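
    To illustrate why broken symlinks are so easy to detect, here is a rough PHP sketch of the idea (modman itself is a shell script, and the path below is just an example):

    // A symlink whose target was moved, renamed or deleted is trivial to spot:
    // is_link() looks at the link itself, while file_exists() follows it.
    $path = 'app/code/community/My/Module';  // hypothetical deployed symlink
    if (is_link($path) && ! file_exists($path)) {
        echo "$path is a dangling symlink - safe to remove and re-create\n";
    }
    // A hard link to a file that was later moved or deleted in the module
    // source looks exactly like any other file, so there is nothing
    // comparable to detect or repair.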

  • http://twitter.com/stefanhallen Stefan Hållén

    So this is probably totally the wrong place to ask this question, but how would one go about executing a query like:
    { $or : [ { start : { $gt : 100}, start : { $lt : 200} }, { end : { $gt : 100}, end : { $lt : 200} } ] }
    In the mongodb odm wrapper? :)

  • Vinai

    Hi Collin, do you twitter?

  • http://colin.mollenhour.com Colin Mollenhour

    Yep: @colinmollenhour

  • Alex

    Hi Collin,
    I have a quick question about Magento config for Zend_Cache_Backend_Redis.
    For the APC cache, in the case of multiple websites per server you need to use '<prefix>..</prefix>'. Do you need to do anything like that if you are using Zend_Cache_Backend_Redis?
    Thank you!

  • http://colin.mollenhour.com Colin Mollenhour

    Maybe your comment didn't come through as expected, but I think you're asking about configuring a prefix to avoid keyspace collisions?

    The Redis cache does not use a configurable prefix. If you run more than one Magento installation on the same Redis server with the same database, then you will definitely have keyspace collisions. I would recommend using a different Redis database for each Magento installation. This can be done with global/cache/backend_options/database. You could also run a separate instance of Redis on a different port, which would let you control the amount of memory each installation can use.
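
    For illustration, those backend options are just the constructor options handed to the backend, so configured by hand it would look roughly like this (server, port and database values are only examples):

    // Each installation points at its own Redis database number so the
    // keyspaces never overlap (values below are only examples).
    $backend = new Cm_Cache_Backend_Redis(array(
        'server'   => '127.0.0.1',
        'port'     => '6379',
        'database' => '1',  // e.g. installation A uses 1, installation B uses 2
    ));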

  • Justin

    Hi Collin, I'm looking forward to using Modman, which seems like a great utility.  I just have a question about best practices when it comes to 3rd party Magento extensions.  This must be a simple or obvious question as I haven't been able to find anyone addressing it online, but in a nutshell:  is there a best practice for dealing with 3rd party extensions when developing a Magento store, using source control and modman?  I'd rather not check 3rd party extensions in, so are they something that wouldn't even typically be handled by modman?  Just install the extensions I want by hand in my production, staging and dev environments individually?  Thanks for any thoughts you may have

  • http://colin.mollenhour.com Colin Mollenhour

    Hi Justin. If you don't want to check third-party code into your repo then you can just check in a list of extension keys (extensions.txt) then use a shell command to make sure they are installed:

    @shell
    cd $PROJECT
    mkdir -p .installed   # -p: don't fail if the directory already exists
    for ext in `cat $MODULE/extensions.txt`; do
        if [ ! -f .installed/$ext ]; then
            ./pear install $ext && touch .installed/$ext
        fi
    done

  • Gordon

    Hi Collin,

    I like your Redis Client Credis and corresponding Zend_Cache_Backend very much and would like them in some projects.

    But for secure use I need immutable states of these projects to deploy them with a dependency manager like composer. Including the master branch is not an option, because every commit could break my app on the next deploy.

    It would be greatly appreciated if you could create releases of both projects from time to time by just adding tags to stable commits on github with a regular name pattern (like release-x.x.x).

    Thank You

  • http://colin.mollenhour.com Colin Mollenhour

    Sure, Gordon. I just pushed 1.0 tags for both.

  • http://colin.mollenhour.com Colin Mollenhour

    Hi Alex,

    I don’t mention APC for several reasons:

    1. It can’t be properly tested via CLI and I didn’t have the time/interest to support it in the benchmark (sorry)
    2. It is not suitable for clustered environments so has limited usefulness
    3. In production you are most likely going to be using APC as an opcode cache and unfortunately the memory pool for opcode cache and user cache is one and the same so you run the risk of your user cache and your opcode cache battling over the available memory (ask me how I know)
    4. Your CLI-based scripts will not be able to interact with the same cache

    I’m sure you could use APC with either of my File or Redis backends as the slow backend and the results would probably be very good assuming you were using the simplified two-levels backend. However, Cm_Cache_Backend_File is very fast by itself due to the filesystem’s own caching so I question if you will get much improvement from using APC. To me the pros do not outweigh the cons.

  • Grant Flynn

    Hi Colin,

    Just wondering if your Cm_Diehard full page cache is completed and working? I’m guessing not, as it sounds very interesting but there doesn’t seem to be a mention of it anywhere.

    Thanks for Cm_Cache_Backend_File, by the way. Site is just about to go live.

    Cheers,
    Grant

  • http://colin.mollenhour.com Colin Mollenhour

    Hi Grant,

    Cm_Diehard is not complete and probably not working. No idea at this point when it will be completed.

    Cheers!

  • Juergen

    Hi Colin,

    Thanks for all your great work on the Redis cache backend. I’m using it in production with great success. I’m wanting to use it as well for Magento Enterprise installations as the storage option for the Full Page Cache.

    So I added a configuration in enterprise.xml in the full_page_cache container, but it does not appear to be using redis at all. Can your redis extension work as a replacement for the apc/memcached options for the Enterprise Full Page Cache?

  • http://colin.mollenhour.com Colin Mollenhour

    I can’t imagine why it wouldn’t but I don’t have EE so I can’t help you, unfortunately. If you find a bug or fix please report it on the github issue tracker. Thanks!

  • Michael Reeves

    Hi Colin,
    First of all I wanted to say your magento redis cache module is fantastic. Thanks for all your hard work. Modman is also great and a big help! 

    I had a few questions/comments on your Cm_RedisSession module.
    1. I would think the default behavior, if the redis server settings are not set, should be to fall back to the db for sessions and then the file system. The current behavior presumes there is a redis server at 127.0.0.1:6379. I didn’t realize that and thought that if these settings were not set in local.xml it would just use the db. I got odd messages like “Redis server went away”.

    2. Is there a reason why Max_Concurrency is set to only 5? I am getting a fairly large amount of ‘Session concurrency exceeded’ error messages. I found that some people are thus getting 503 errors and I didn’t know why that was. It seems like people who open several tabs at once, and more importantly crawlers like googlebot, may see this quite often. Is this by design?

  • http://colin.mollenhour.com Colin Mollenhour

    Hi Michael,

    1. I think if you do not want to use Redis you should probably deactivate it in the app/etc/modules/Cm_RedisSession.xml file. I see your point, but this is subjective and as there may already be people using it with the defaults then I think it is best to leave it as-is to avoid unexpected problems for those people.
    2. In my tests, it took some pretty abnormal activity for a user to exceed 5 concurrent sessions (e.g. I just opened 13 tabs as fast as I could and only two gave 503). However, I recently fixed issues caused by processes that died from fatal errors, which may have been causing the issues you were seeing with users. Googlebot does not use cookies so each request will start a new session so it shouldn’t be an issue here. I suppose the sensitivity to concurrency will vary by the speed of your server but if it is very slow then you probably don’t want single users running 10+ concurrent requests anyway.. That said, I’d like to move all of the constants into configurable fields, so would be happy to accept a pull request! :)

    Thanks,
    Colin

  • Anonymous

    Hi Colin,

    I’m using your Cm_Cache_Backend_Redis module and it’s awesome!

    Just wondering if I can specify a cache lifetime (259200, for instance, as suggested on http://www.nbs-system.co.uk/blog-2/magento-optimization-howto-en.html) within the cache configuration, and if that would actually have any effect in giving the cache a longer duration.

    Thanks!

  • http://colin.mollenhour.com Colin Mollenhour

    The config/global/cache/lifetime is an option for the cache frontend, not the cache backend, and it only sets the *default* cache lifetime, which can still be overridden. So yes, it will work as expected. Whether or not you want to set it that high I don’t know.. Having it that high means that anything cached with the default lifetime that may somehow become moot would be consuming memory for 3 days.. E.g. if you cached data per session. The LRU eviction algorithm would somewhat make up for this though, but evictions do have an overhead.
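
    In other words, the configured value only applies when a save doesn’t pass its own lifetime. A quick sketch (the id, tag and data below are just placeholders):

    $data = 'some expensive-to-build value';  // placeholder
    // No lifetime given: the frontend default (config/global/cache/lifetime) applies
    Mage::app()->saveCache($data, 'my_cache_id', array('MY_TAG'));
    // Explicit lifetime: overrides the default, here one hour
    Mage::app()->saveCache($data, 'my_cache_id_hourly', array('MY_TAG'), 3600);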

  • Anonymous

     Hi Colin,

  • Thomas Steigerwald

    Nice contribution with the Cm_Cache_Backend_File work. Here are some results from a recent test of mine.

    Cache Backend: APC (Zend_Cache_Backend_TwoLevels) + Cm_Cache_Backend_File
    Loading default test data…
    Loaded 10000 cache records in 5.97 seconds (5.7765 seconds cache time). Data size is 5038.2K
    Benchmarking 4 concurrent clients, each with 50000 operations…
    4 concurrent clients completed in 99 seconds

             |   reads|  writes|  cleans
    ————————————
    Client  2|29041.02|   20.35|    0.55
    Client  1|29540.26|   19.11|    0.56
    Client  3|29057.48|   26.98|    0.62
    Client  0|30166.52|   32.51|    0.72
    ————————————
    ops/sec  |117805.28|   98.95|    2.45

  • http://colin.mollenhour.com Colin Mollenhour

    Note that APC cannot accurately be tested with the current benchmark because APC lives inside a single process and the benchmark uses multiple processes. That is, one process pre-loads the cache data and then dies, at which point the APC cache dies and the 4 concurrent processes are all operating on a completely empty cache so the reads are artificially high due to just returning false instead of actual data. No doubt APC is fast, but the lack of tags support makes it a no-go IMO..

    It looks like there may be an issue with Cm_Cache_Backend_File when used with TwoLevels although I have to say I’m not really motivated to fix it since it works best by itself anyway..  :)

  • Anonymous

    I thought something was off with the read / write ratio, which is why I posted it previously. In any event I’m using Cm_Cache_Backend_File now by itself and quite happy with the results. Keep up the good work.

  • Steve Holdoway

    I’m reading up on your Cm_RedisSession module for Magento. Will this fully support the use of multiple backend sessions with no session stickiness at all?

    I’m trying to get my head around the locking strategy, and whether shifting backends will cause a BREAK_AFTER second delay.

    Many thanks.

  • http://colin.mollenhour.com Colin Mollenhour

    Yes, the Cm_RedisSession module allows multiple frontend nodes to share one Redis server for storage, thereby removing the need for sticky sessions. This lets you add/remove nodes using a load-balancer or round-robin DNS or other method without ever losing sessions or having balancing issues.

    The locking strategy uses atomic increments which return the new value so in the case of the session not already being locked there is no waiting for a lock.
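
    A rough sketch of the idea using the Credis client (this is an illustration only, not the module’s actual code, and the key name is just an example):

    $redis = new Credis_Client('127.0.0.1', 6379);
    $sessionKey = 'sess_example';  // hypothetical session key

    // HINCRBY is atomic and returns the new value, so a result of 1 means no
    // one else held the lock and we acquired it without any waiting.
    $lock = $redis->hIncrBy($sessionKey, 'lock', 1);
    if ($lock == 1) {
        // ... read and write the session data, then release the lock
        $redis->hSet($sessionKey, 'lock', 0);
    } else {
        // Someone else holds the lock; the real module waits and retries here.
    }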

  • Anonymous

    I’m running this with 3 x PHP servers supporting http: ( and one https: ) all behind an nginx server, which defines all 3 in a single upstream block, each with a fail_timeout of 30s.

    On each of these servers, the local.xml connects to the remote redis server, config the same as your example, except for a max_concurrency of 32.

    I’m receiving a number of error messages – on average 5/sec/server,

    Unable to write session, another process took the lock: obkiiqbv4tctrjjhkd6osbrll3

    and

    Broke lock for sess_obkiiqbv4tctrjjhkd6osbrll3. Tried 32 times. Lock: 5, BREAK_MODULO: 5

    as examples. Less frequently, I see

    Detected zombie waiter for sess_gkae5da1k9ljkkuqr0v7lsmt60 (3 waiting)

    Any ideas what could be causing this – where should I be poking around?

    The redis server reports a peak of approx 5 connections / sec, and currently holds 60,000 sessions – a quad CPU virtual server, with 4GB/75% full – load ave usually below 1.

  • http://colin.mollenhour.com Colin Mollenhour

    Something is definitely off.. Note that max_concurrency is *per session* so unless you want to support users loading 32 pages all at once then it doesn’t need to be that high. Setting it low protects you from users tying up too many resources by giving them 503 errors when they exceed max_concurrency.

    Do you have any PHP fatal errors? Those will cause the “Broke lock” messages since the fatal error causes the session to remain locked. The next page load for that session will break the lock.

    Is it possible you have some pages that load extremely slowly or perhaps a user that is abusing your server?

    Using persistent connections to Redis works well in my experience. Might give that a try, but I don’t really know what else to suggest. I’m using 3 nodes as well, with currently 550k sessions and avg 100 connected clients. There are some messages in the log files, but nothing like what you’re seeing as far as frequency goes.

  • Anonymous

    No, no PHP (5.3.21 off dotdeb) errors reported apart from the odd pool-busy warning. I am running Suhosin though.

    I’ve no reports of abuse, nginx reports a peak of c. 100 requests/sec, 400 concurrent connections, 20mb/s internal traffic, 5Mb/s external… heavy lifting performed by a separate CDN.

    I’ll try lowering the max concurrency and see what changes. Last time I tried the persistent stuff I got errors for some reason. I’ve just been playing with it on the staging server, and I can’t replicate them.

    If they make a difference, I’ll look at using persistent connections for the cache too.

    Thanks for the pointers.

  • Jérôme Siau

    Hi, 

    About Cm_Cache_Backend_Redis, I would like to clean some ids by tag in a module.

    Here is what I’m doing:

    $cache = Mage::app()->getCache()->getBackend();
    var_dump($cache->getTags());

    I want to use the clean function, but the tags have a prefix and I don’t understand where it comes from.
    Can you help me?

    Thx

  • http://colin.mollenhour.com Colin Mollenhour

    The tag prefix comes from the Magento wrapper (Mage_Core_Model_Cache), so clean through the wrapper rather than calling the backend directly: Mage::app()->cleanCache($tags) or Mage::app()->getCacheInstance()->clean($tags).
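
    For example ('MY_MODULE_TAG' is just a placeholder tag):

    // The wrapper applies the configured prefix to the tags for you:
    Mage::app()->cleanCache(array('MY_MODULE_TAG'));
    // or, equivalently:
    Mage::app()->getCacheInstance()->clean(array('MY_MODULE_TAG'));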

  • Cherpin Dmitry

    Hi Colin,
    I installed your plugin Cm_RedisSession. The plugin works, but I cannot log in to the Magento administration panel. I updated the cache and tried to disable it altogether. Please tell me how I can solve this problem?

  • MakeGoodMedia Toronto

    Hi Colin,

    Is there any chance of installing the extension without modman? If so could you provide a link pls.

    cheers!
    Steve

  • http://colin.mollenhour.com Colin Mollenhour

    I don’t know what extension you are referring to, but in general you can manually copy the files to the correct locations using the file named “modman” as a reference (left side is source, right side is destination) if for some reason you can’t or don’t want to use modman. Remember to check out git submodules if necessary.
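
    If you’d rather script it, here is a rough PHP sketch of reading the mapping file (it ignores modman’s @import/@shell directives and wildcards, only copies single files, and the paths are just examples):

    $moduleDir  = '/path/to/.modman/Some_Module';  // hypothetical module checkout
    $magentoDir = '/path/to/magento';              // hypothetical Magento root
    foreach (file($moduleDir . '/modman') as $line) {
        $line = trim(preg_replace('/#.*$/', '', $line));   // strip comments
        if ($line === '' || $line[0] === '@') continue;    // skip blanks and directives
        $parts = preg_split('/\s+/', $line, 2);            // left: source, right: destination
        if (count($parts) < 2) continue;
        @mkdir(dirname($magentoDir . '/' . $parts[1]), 0777, true);
        copy($moduleDir . '/' . $parts[0], $magentoDir . '/' . $parts[1]);
    }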

  • Amin Bhamani

    Does your Hide Out of Stock Items extension work with Magento Community 1.7?

  • Coen Swaans

    Colin, great Redis module, only got one problem… :) How can I use files as a backup/fallback? On our local machines and development server we don’t have Redis…. memcached falls back to a file-based cache if not present.

  • http://colin.mollenhour.com Colin Mollenhour

    If you use a different app/etc/local.xml for each environment it shouldn’t be an issue to configure files as the backend when Redis isn’t present. The fallback you speak of seems to be a feature of Mage_Core_Model_Cache so I think that is outside of the scope of what Cm_Cache_Backend_Redis should handle.

  • Coen Swaans

    We use a single Git repository to “feed” all our environments (local, acc, prod). Already got a fix for the problem… changed loadBase() to read different config files based on http_host. Thanks for the reply and great work with your modules!

  • Simone Fantini

    Hi Colin, thanks for your contribution, very precious for the open source world. Just wondering how to implement the Redis cache (and your module, obviously) on my Magento EE, using my enterprise.xml and full_page_cache.
    It can sound strange, but even on the Magento website I cannot find a stable guide on how to work with full_page_cache.

    thank you
    Simone

  • http://colin.mollenhour.com Colin Mollenhour

    Hi Simone, I am not a Magento EE partner and therefore don’t even have access to the EE code, and as such know almost nothing about EE. However, I think this type of question would fall under the scope of the service provided with your purchase of a Magento EE license, at least I would hope it would for the money you paid.. :)

  • Simone Fantini

    I will indeed ask the Magento EE team for support; I just thought that in the past you might have had the chance to play with EE using Redis cache storage.
    Regards
    Simone

  • http://colin.mollenhour.com Colin Mollenhour

    I dunno about enterprise.xml but other community members have added an example configuration for app/etc/local.xml in the README.md file.

  • Yvo Alen

    Collin, looking for help to tune Magento for 5-7 million books. I was reading about MongoDB; can anyone (you) provide me a solution budget for this? What is needed for Magento CE, the steps and timing, and finally the budget.

    Thank you,

    Yvo

  • Richard

    Is there a fallback system to APC or the file system when the redis server stops?

  • http://colin.mollenhour.com Colin Mollenhour

    No. I don’t think it is a good idea either. Just keep that redis server running. :)

  • Richard

    Ok, too bad.. We had a problem with a script with a lot of memory consumption, which somehow crashed the redis server. But I created a workaround to fall back to the normal Magento cache.. Now I am trying to optimize the redis server, any tips? ;-)

  • http://colin.mollenhour.com Colin Mollenhour

    If using a single node then just use the Cm_Cache_Backend_File backend as it is faster in most cases. Otherwise, either put the database and web server on different machines or make sure that the web server’s max clients setting is configured so that memory consumption is limited properly.

  • Anonymous

    Hi Colin,
    First of all I’d like to thank you for the great community contribution.

    I have a quick question and I hope you can clear/confirm something.
    Currently we have 1 server with Magento, with Cm_Cache_Backend_File and Cm_RedisSession.

    As you know, this Cache backend is faster than Cm_Cache_Backend_Redis.

    Our multiple tests also confirm that. One thing I’d like to note is that you have mentioned many times that placing /var/cache on tmpfs doesn’t improve anything. Our tests, though, show that it does make sense to use tmpfs, especially if /var/cache grows. Not to mention that you reduce IO on the primary disk.

    Anyway, my question is about 2-3 server configuration. We want to switch to a two-server configuration with load balancer.

    Is it possible for the 2 servers to have their own set of /var/cache but connect to the same Redis server for session storage?

    What are the complications of that setup? I didn’t see anything user-specific stored in the cache. Or am I wrong?

    Another question I wanted to ask you: it’s possible to set up the PHP session_handler globally to store sessions on a Redis server. What are the benefits of using Cm_RedisSession vs the PHP session_handler?

    Thank you!

    Alex

  • http://colin.mollenhour.com Colin Mollenhour

    Hi Alex,

    In general I’d say if you are on a single-server setup use the file-system backends for both cache and sessions and when you move to a multi-server setup use Redis for sessions and Redis or MongoDb for cache. The filesystem backends are not meant to be used with multiple nodes or on a shared filesystem. Also, running fs backends on tmpfs may benchmark faster, but I don’t think the added risk is worth the benefit since if your tmpfs disk fills up, things might get hairy.

  • Anonymous

    Hi Colin, thank you for your reply. But what are the possible complications if we still leave /var/cache independent on each server? I don’t see any user-specific data stored. Moreover, you can remove the cache (delete all files) while the server is running and it doesn’t seem to change anything. Also, what about my second question: it’s possible to set up the PHP session_handler globally to store sessions on a Redis server. What are the benefits of using Cm_RedisSession vs the PHP session_handler?

  • http://colin.mollenhour.com Colin Mollenhour

    Cache invalidation will only happen on the server where it is initiated, so if you save a product on machine A, then machine B will not get its cache invalidated.

    The biggest difference is that the Cm_RedisSession module supports locking. Also, Cm_RedisSession can be used without the phpredis extension installed. It has a few other features as well, like giving shorter lifetimes to crawlers, etc..

  • Pradip Shah

    Hi Colin,

    We have used your plugin for many of our customers and it’s great to see it included in EE 1.13. You are a great community champion.

    One of our customers has a specific issue with the session cache – when enabled, we find that it gets into a lock quite often killing page load times.

    Specifically, we find that many dumps of the PHP process when it slows down are in the read function, in usleep(). I really do not know what this statement does:
    $lock = $this->_redis->hIncrBy($sessionId, 'lock', 1);

    but clearly the site was not under attack, since I traced it when my own access became slow. Any pointers to debug this issue?

    Regards

    Pradip

  • http://colin.mollenhour.com Colin Mollenhour

    Redis doesn’t have its own locking mechanism so the extension uses an “optimistic” locking algorithm. This usually works fine unless your code has fatal errors, in which case the lock is not “released” as it should be. If you think there is a bug, please open an issue on github. Thanks!

  • SNH

    Hi Collin, I was wondering if this article is still relevant and applies to 1.8.1

    http://colin.mollenhour.com/2009/07/14/hiding-out-of-stock-items-in-layered-navigation/

    We are still seeing this strange behavior and really want to get rid of it. http://magento.stackexchange.com/questions/13512/magento-hiding-and-not-counting-out-of-stock-in-layered-navigation

    Any help appreciated. Many thanks

  • http://colin.mollenhour.com Colin Mollenhour

    It is very old and the feature was officially supported in 1.4+. I’m fairly certain that the way Magento officially supported it is basically the same as my method.

  • Anonymous

    Hi Collin,

    I’m having troubles with lzf …

    If I activate it, then I can’t log in to the admin panel and the sessions aren’t working properly… I have to change to gzip.

    I’m using PHP-FPM, is that the problem?

  • http://colin.mollenhour.com Colin Mollenhour

    No, it should work just fine with PHP-FPM.. Please open a ticket with more details on github if you find a bug.

  • http://www.barproducts.com Denis Baldwin

    Hey Colin – Would you be interested in some paid consulting for Magento? We want to install Redis and we’re having all kinds of issues installing and configuring it. We have root on the box and followed your directions, but are not quite sure how to proceed. If so, what would you charge to do the install/configure? Please let me know at [email protected] dot com.

  • James D

    Hi Colin,

    We’re using your Cm_RedisSession_Model_Session on our Magento site and in general it works great, apart from the odd occasion where sessions get locked for a large amount of time. I’ve been looking into this and I was just wondering what the following code chunk is trying to do:

    // Otherwise, add to "wait" counter and continue
    else if ( ! $waiting) {
        $i = 0;
        do {
            $waiting = $this->_redis->hIncrBy($sessionId, 'wait', 1);
        } while (++$i < $this->_maxConcurrency && $waiting < 1);
    ..

    Is the 'while (++$i < $this->_maxConcurrency ..’ just there to catch situations where the waiting state has somehow incorrectly become less than 0, or am I missing something?

    Cheers,
    James

  • http://colin.mollenhour.com Colin Mollenhour

    Yes, that is correct. If you think there is a bug please open an issue on the github page. If you can reproduce the issue that would be fantastic. I definitely think the locking code could be improved, I just don’t know how so code review is welcome.

  • Kishan Rajdev

    Hello Colin,

    I am currently using Redis for my website. It works fine, except in certain scenarios. I checked the error log and it shows the session is getting locked. On refreshing, the session starts working again. Can you please help to resolve this?

  • http://colin.mollenhour.com Colin Mollenhour

    Make sure you are using the latest version from github and tune the configuration to your liking. Infrequent locking issues are not a huge concern; don’t expect to never get them.

  • Philip Lee

    Great extension. I have my admin on a separate server and subdomain from my app servers. I can’t clean the entries that the app servers create from the admin server. It looks like the app servers create entries that look like “zc:ti:109_”, while the admin server keeps trying to delete entries that look like “zc:ti:403_”. After a lot of poking around, I tried running this: “echo Mage::app()->getCache()->getOption(‘cache_id_prefix’);” from both kinds of servers. The admin server returned “403_” while the app servers returned “109_”. Any idea why this could be happening, and what I could do to fix it? Thank you.

  • Angel Martin

    Hi Collin, we need to add the product name to orders and found your Cm_OrderProducts, which seems to do what we need in a very simple way. We do not have modman and just copied the files to the directories according to the modman file, as you suggested to another commenter on this blog. After cleaning the cache we are not able to see the new column in the Orders grid. Are we missing something? How can we troubleshoot it? Thanks a lot

  • Andrea Merli

    Hi Colin,

    thanks for your work.
    I’ve a little question: can I use Cm_Cache_Backend_Redis with Redis Cluster out of the box?

    Thanks

  • Alessandro

    Hi Colin, we are using your Magento Redis module, but we found a problem: when we flush the cache but don’t clean the sessions (also in Redis), sometimes some users get a 500 error page, and we found that the problem is on line 211 in the file /lib/Cm/Cache/Backend/Redis.php, a stack overflow error.

  • http://colin.mollenhour.com/ Colin M

    Hi Alessandro, please report the issue on Github and make sure you are using the latest version (master on github). Also, I strongly recommend against using the same Redis instance for both cache and sessions (use separate instances listening on different ports/sockets).

  • Abhishek Pandey

    Hi Colin,
    I will start a new e-commerce website using Magento (the idea is to use MongoDB for some models for a performance improvement), and that is how I came across your module on GitHub. I was going to use your https://github.com/colinmollenhour/magento-mongo module. But before I dive deep into it, can you help me answer a couple of questions? It would be of great help.

    1. Will this module work from the Magento Admin Panel also? I mean, we don’t have to do any Magento core code changes, and the Admin Grid and everything will work and fetch the data from MongoDB instead of MySQL?
    2. Does it support all of the models of Magento or only a few of them? Can you please name which models are stored in MongoDB (and not in MySQL)?

    Thanks
    Abhishek

  • http://colin.mollenhour.com/ Colin M

    Hi Abhishek. There is no code that relates to the Mage_Catalog or other models as it is not a drop-in replacement. You could use it as a base to create your own Mage_Catalog replacement but it would still be a lot of work. I updated the README to try to make this more obvious.

  • Anonymous

    Do you know if Heartland’s SecureSubmit Payment Module will work with an older version of Magento? Magento Connect states Compatible with: 1.5, 1.6, 1.6.1, 1.6.2.0, 1.7, 1.8, 1.8.1

    I am presently running Community Ver. 1.3.2.4 and have been using Heartland’s WebConnect. I just got a letter from Heartland saying WebConnect is being discontinued and I need to switch to SecureSubmit.

    I am not about to upgrade my Magento site to the latest version of community since I just don’t generate the revenue from the website.

    Any insight would be appreciated, since effective 4/1/2016 Heartland’s WebConnect will no longer process credit cards, which then puts me out of business.

  • http://colin.mollenhour.com/ Colin M

    Well, you can always download it and try it on your test environment, which I’m sure you have, because surely you aren’t asking me to tell you that it’s all clear for you to install it on your very old and no doubt begging-to-be-hacked production installation, right? :)

    I don’t know what to make of the conflicting statements that you just don’t generate revenue on your website but without an updated payment extension you will be put out of business.. If you’re not backporting every security update then eventually you *will* get hacked. If you’re somehow certain that you have no security risks then the easiest way to keep operating would be to modify the SecureSubmit extension as necessary (no idea what is needed, just assuming that it will need minor changes to work with that old of a version). Perhaps a fully hosted solution like Shopify would be more appropriate for your current business needs?

  • Kim K.

    Hi Colin,

    Is this current for Magento 1.9.x?
    https://github.com/Vinai/Symlink-Cache

    We’re looking into speeding up the Add to Cart actions. Tried all else so this is next.

    Thanks

  • http://colin.mollenhour.com/ Colin M

    That backend is very slow for normal write operations. You should use Cm_Cache_Backend_File if you just have one machine.