

Whether it is video content for entertainment, articles, or information for managers in an organization.
Apart from the content itself, the user experience (the experience of the content’s consumer) is very important for customer retention. Providing a great user experience depends on many factors, including technological and operational ones.
I cannot overstate the importance of the content management system and the underlying technology to successful customer engagement and a great user experience. Behavioral analysis shows that the main factor influencing users’ decisions to purchase premium content is the content consumption experience, including ease of use and personalization; this matters even more than the quality of the content itself.
The user experience is the outcome of a number of factors that contribute to it collectively. Some of them depend on the technology and the application infrastructure that support the content solution. For a positive user experience, the solution must provide fast response times and high performance (with enough headroom to keep the experience good during periods of peak activity), immediate display of updated content, efficient content production, and effective, stable APIs.
Until recently, we recommended that large content sites run on a multi-layered application and computing architecture: external layers that deliver updated static content (by content type), and an internal IT infrastructure consisting of a content management system, a digital presentation layer, an API system, and a VM-based infrastructure running the various applications behind a load balancer.
Incoming internet traffic first reaches a CDN layer that returns static content to the visitor. Since a content site is updated frequently, a mechanism was needed that could refresh the cached content just as frequently and be initiated from the application.
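As an illustration, the application can trigger that refresh by calling the CDN’s purge API when an item is published or updated. The sketch below is minimal and hedged: the purge endpoint, zone name, and token are hypothetical placeholders, not a specific CDN’s API.

```python
# Minimal sketch of an application-initiated CDN purge after a content update.
# The endpoint, zone, and token below are hypothetical placeholders; real CDNs
# (Cloudflare, Fastly, Akamai, ...) each expose their own purge API.
import requests

CDN_PURGE_URL = "https://cdn.example.com/api/zones/{zone}/purge"  # placeholder
CDN_TOKEN = "***"  # placeholder credential

def purge_paths(zone: str, paths: list[str]) -> None:
    """Ask the CDN to drop cached copies of the given paths."""
    response = requests.post(
        CDN_PURGE_URL.format(zone=zone),
        headers={"Authorization": f"Bearer {CDN_TOKEN}"},
        json={"paths": paths},
        timeout=10,
    )
    response.raise_for_status()

# Called by the CMS right after an article is published or updated, e.g.:
# purge_paths("news-site", ["/articles/123", "/"])
```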
In many cases, the computing infrastructure sat within the organizational network (or in the organizational cloud) and included several layers, such as protection, traffic management, application servers, API servers, and databases. The required system resources were sized by the amount of computing needed during peak periods, so if, for example, a peak event takes place once a month, that amount of computing has to be maintained throughout the life of the application (with a slight caveat regarding “reserved instances”).
Given the relative complexity and the cost involved, smaller publishers and other content sites settled for a much more modest system, with limited recovery capabilities, limited responsiveness for content-consumption APIs, and a limited API surface (if any). On the other hand, they enjoyed lower infrastructure costs.
Linnovate has a long history of building large content solutions on an architecture more or less like the one described above. The Linnovate solution for content sites is based on the republish system, which uses a Drupal distribution, Elastic, GraphQL, Redis, and other technologies. In recent months, we have received requests from smaller publishers who want to use our solution to provide the high-end experience offered by large sites, but at a lower cost. That presented a great opportunity to optimize our solution and minimize computing and maintenance costs.
The new version we released uses a micro-services architecture, Docker containers, Kubernetes, and a cloud-based DB service: an architecture built on technologies that take better advantage of system resources. We also challenged ourselves to consider every scalable cloud-based infrastructure that could be used to reduce costs.
We used WAF systems and network filters that provide a layer of protection and acceleration (CDN). The CDN capabilities of these systems are not optimal for large content sites (for example, they lack a TTL per content item), so we added a local smart-CDN layer that complements the missing features.
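For illustration, the smart-CDN layer can be driven by per-content-type cache lifetimes set on the origin responses. The content types and TTL values below are assumptions chosen for the example, not our production configuration.

```python
# Minimal sketch: the origin / smart-CDN layer assigns a cache TTL per content type
# using standard Cache-Control directives. The mapping below is illustrative only.
TTL_BY_CONTENT_TYPE = {
    "homepage": 60,         # seconds: refreshed very frequently
    "article": 300,
    "image": 86400,
    "static-asset": 604800,
}

def cache_headers(content_type: str) -> dict:
    ttl = TTL_BY_CONTENT_TYPE.get(content_type, 120)  # conservative default
    return {
        # stale-while-revalidate lets the edge serve slightly stale content
        # while it refreshes the item in the background.
        "Cache-Control": f"public, max-age={ttl}, stale-while-revalidate=30"
    }
```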
The computing infrastructure runs on a Kubernetes cloud that executes a number of micro-services. Each service runs with redundant replicas, so we get a scalable, highly available micro-services infrastructure at lower cost than the previous architecture we presented.
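As a rough sketch of what that redundancy means in practice, the snippet below uses the official Kubernetes Python client to make sure every micro-service deployment keeps at least two replicas; the namespace and deployment names are illustrative, not the actual service list.

```python
# Minimal sketch: keep every micro-service deployment at two or more replicas so a
# single pod failure does not take the service down. Assumes the official
# `kubernetes` Python client; namespace and deployment names are illustrative.
from kubernetes import client, config

MIN_REPLICAS = 2
MICROSERVICES = ["cms", "graphql-api", "render", "search"]  # illustrative names

def ensure_ha(namespace: str = "republish") -> None:
    config.load_kube_config()  # use config.load_incluster_config() inside the cluster
    apps = client.AppsV1Api()
    for name in MICROSERVICES:
        scale = apps.read_namespaced_deployment_scale(name, namespace)
        if (scale.spec.replicas or 0) < MIN_REPLICAS:
            apps.patch_namespaced_deployment_scale(
                name, namespace, {"spec": {"replicas": MIN_REPLICAS}}
            )

if __name__ == "__main__":
    ensure_ha()
```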
We also decided to use a cloud-based DB service instead of a private DB cluster (in the enterprise data center or a private cloud). From the experience we have gained, we concluded that for relatively small sites a managed DB service is cheaper (infrastructure plus maintenance) than a self-managed DB cluster.
Another important component is the API, through which content can be accessed by various content applications, such as mobile apps or digital magazines. For smaller sites, we simplified the digital presentation layer with a dedicated API cache.
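A dedicated API cache can be as simple as a Redis read-through in front of the content API. The sketch below assumes the redis-py client; the key format, TTL, and endpoint are illustrative choices rather than the actual implementation.

```python
# Minimal sketch of a read-through API cache in front of the content API.
# Assumes the `redis` (redis-py) client; key format, TTL, and the backend URL
# are illustrative, not the production configuration.
import json

import redis
import requests

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
API_BASE = "https://example-site.com/api"  # placeholder
CACHE_TTL = 120  # seconds

def get_article(article_id: str) -> dict:
    key = f"api:article:{article_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no call to the backend
    response = requests.get(f"{API_BASE}/articles/{article_id}", timeout=10)
    response.raise_for_status()
    data = response.json()
    cache.setex(key, CACHE_TTL, json.dumps(data))  # populate for subsequent readers
    return data
```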
This architecture also required changes to the CI/CD and ALM processes, which I will elaborate on in another article.
Our latest release, built on the cloud-native concept and cloud-based services, reduced not only infrastructure costs but also maintenance and deployment effort (and cost), which opens new opportunities for us with a segment of customers that, until recently, we could not have considered.