Given a propositional formula ψ, the model counting problem, also referred to as #SAT, seeks to compute the number of satisfying assignments (or models) of ψ. Modern search-based model counting algorithms are built on conflict-driven clause learning, combined with the caching of certain subformulas (called components) encountered during the search. Despite significant progress in these algorithms over the years, state-of-the-art model counters often struggle on large but structured instances that typically arise in combinatorial settings. Motivated by the observation that these counters do not exploit the inherent symmetries exhibited by such instances, we revisit the component caching architecture employed in current counters and introduce a novel caching scheme that focuses on identifying symmetric components. We first prove the soundness of our approach, and then integrate it into the state-of-the-art model counter GANAK. Our extensive experiments on hard combinatorial instances demonstrate that the resulting counter, SYMGANAK, improves over GANAK both in terms of PAR-2 score and the number of instances solved.
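As a small illustration of the counting problem, consider ψ = (x ∨ y) over the variables x and y: its satisfying assignments are (x = 1, y = 0), (x = 0, y = 1), and (x = 1, y = 1), so the model count of ψ is 3.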