

The pause is a good opportunity to catch up with everything that happened during the week that has nothing to do with work.
I open the newspaper app on my phone to read the weekly column of a writer I like. Somehow the column does not appear in the app; its search engine finds only previous columns. Strange: the week has been full of events, and my writer would not miss the opportunity to voice his opinion on a week like this. I try the newspaper’s search engine in a web browser, and surprisingly, the column appears.
Why did the article appear in the browser but not in the app? There could be several reasons, but I assumed the difference lay in the caching mechanism. When it comes to designing or installing a content system that serves a broad consumer audience, no subject raises more discussion than the caching architecture. Perhaps this is because caching is not a monolithic element but a collection of different services, or perhaps simply because caching is a big topic in itself.
The role of caching is to optimize system performance: to respond immediately, without the cost of recreating the information. Like every good thing, it has a price, and in the case of caching the price is generally that the copy we serve may differ from what would be produced if the information were recreated at that moment.
Every technological system that comes to mind has a caching element; caching is used everywhere because the benefit far outweighs the cost. Within each CPU there is cache memory that saves accesses to main memory. When you run PHP code, you usually benefit from the PHP opcode cache, and when you query a busy database, you rely on its acceleration mechanisms, which are themselves caches. Almost all content on a news site comes from edge servers that hold a copy of the original information.
Caching elements are part of the technical architecture of a solution and are designed to meet specific performance requirements. In real-time systems, which must make decisions within milliseconds, caching has to match those needs. Consider a spacecraft landing on the moon: the spacecraft’s command system obtains real-time information from a collection of telemetry sensors. If the information arriving at the command system is not up to date, the decisions it makes will be wrong. Therefore, if caching is used in such systems, the cache must be refreshed at a rate higher than the sampling rate used for decision-making. On a content site, by contrast, a review article about a new vehicle will probably not be updated as frequently as the home page of a news site, and the home page will be updated less frequently than a breaking-news page. So if, for example, a home page is updated frequently and each update takes about a minute, it makes sense to keep a copy of the page for no more than 30 seconds.
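The home-page reasoning above can be sketched as a minimal time-to-live (TTL) cache. The class, names, and the 30-second TTL are illustrative, not taken from any particular content system:

```python
import time

class TTLCache:
    """A minimal time-to-live cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, timestamp)

    def get(self, key, rebuild):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]  # fresh copy: no rebuild cost
        value = rebuild()    # stale or missing: pay the rebuild cost once
        self.store[key] = (value, now)
        return value

# Per the example above: if rebuilding the home page takes about a minute,
# serving a copy that is at most 30 seconds old keeps responses immediate.
cache = TTLCache(ttl_seconds=30)
page = cache.get("home", rebuild=lambda: "<html>...rendered home page...</html>")
```

The design choice is the same trade-off the text describes: within the TTL window every reader gets an instant, possibly slightly stale copy; only the first request after expiry pays the full rebuild cost.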
Considering the Republish system, there are a number of application-level caching mechanisms, beyond those in the hardware and software infrastructure.
Returning to the example of the newspaper’s content system above, I assume that the mobile app retrieves its information through an API with a caching mechanism, or that the answer to its query comes from a cache. Either way, the results the app received dated from before the column was published in the content management system, so the copy of the information did not include the column I wanted to read. Most likely, had I waited longer, the cached version would have expired and the updated information would have reached the app’s users.
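The expiry behavior described here is commonly expressed with the HTTP `Cache-Control: max-age` directive, which tells clients how long a response may be reused. As a minimal sketch of the freshness check a client or API layer performs (the function name and the numbers are illustrative, not the newspaper’s actual setup):

```python
import time

def is_fresh(cached_at, max_age_seconds, now=None):
    """Return True if a response fetched at cached_at may still be reused."""
    if now is None:
        now = time.time()
    return (now - cached_at) < max_age_seconds

# A response cached 90 seconds ago under Cache-Control: max-age=60 has
# expired, so the client refetches and would now see the published column.
stale = not is_fresh(cached_at=time.time() - 90, max_age_seconds=60)
```

Until that window elapses, the app keeps answering from its copy, which is exactly why the column was visible in the browser before it reached the app.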