Technical Weekly Report for the third week of February 2023

This week was focused on dealing with a risk item discovered before the holidays: a service that was using Redis without setting a TTL on its keys, relying instead on Redis's eviction policy. The service runs Redis with an LRU eviction policy configured. This may look fine, but there is a pitfall when a burst of write traffic arrives within a short period of time: Redis triggers eviction and devotes most of its effort to freeing enough space, which means it cannot serve normal operations such as queries well. The result is dramatic fluctuations in both read and write latency to Redis from … Read more
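As a minimal sketch of the mitigation described above, the snippet below writes cache keys with an explicit TTL instead of leaving expiry entirely to the LRU eviction policy. It assumes the standard redis-py client; the key names, value, and TTL are illustrative placeholders, not taken from the actual service.

```python
import redis

# Connect to a Redis instance (host/port are placeholders).
r = redis.Redis(host="localhost", port=6379, db=0)

# Write a cache entry with an explicit TTL (1 hour, chosen arbitrarily here)
# so the key expires on its own instead of waiting for LRU eviction to kick in
# under memory pressure.
r.set("user:profile:12345", "serialized-profile-data", ex=3600)

# Equivalent for a key that already exists: attach a TTL after the fact.
r.expire("user:profile:12345", 3600)

# Inspect the remaining time to live, in seconds.
print(r.ttl("user:profile:12345"))
```

With every key carrying its own expiry, memory is reclaimed gradually as entries age out, rather than in a sudden eviction sweep during a write burst.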

Weekly Technical Report for February 2, 2023

The end of January through the beginning of February fell during Chinese New Year. During this period, the person responsible for keeping services stable over the Spring Festival needs to be on call to handle online issues. I was in a constant state of worry, but fortunately no online problems came looking for me; the best outcome for Chinese New Year is that everything stays quiet. This week I am evaluating the impact of a major requirement. For a new business requirement, especially one applied to a complex business system, there are multiple impacts that need to be considered. If, at this point, one is not particularly familiar with the system and has little experience … Read more

Understanding the CPU Resource Allocation Mechanism in Kubernetes

Kubernetes (k8s) is a popular container orchestration platform that allows developers to deploy, manage, and automate containerized applications in a cloud environment. In Kubernetes, CPU resource allocation is a critical issue that directly affects the performance and reliability of applications. In this article, we introduce the CPU resource allocation mechanism in Kubernetes, including CPU requests and limits, the CPU share mechanism, the CPU scheduler, and other related concepts, to help developers better control the CPU allocation of containers and so improve the performance and reliability of applications. CPU allocation units: in Kubernetes, CPU is allocated in millicores, with one CPU equal to 1000 millicores. For example, a Pod requesting 0.5 CPU can be expressed as 500m. It is realized based on the CPU … Read more
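As a rough illustration of how requests and limits are expressed in these units, the sketch below builds a Pod spec with the official Python kubernetes client; the Pod name, image, namespace, and resource values are placeholders assumed for the example rather than anything from the article.

```python
from kubernetes import client, config

# Load kubeconfig from the default location (assumes local cluster access).
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cpu-demo"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="nginx:1.25",
                resources=client.V1ResourceRequirements(
                    # Request 0.5 CPU (500 millicores): the scheduler reserves
                    # this amount when placing the Pod on a node.
                    requests={"cpu": "500m"},
                    # Limit of 1 CPU (1000 millicores): an upper bound the
                    # container cannot exceed.
                    limits={"cpu": "1"},
                ),
            )
        ]
    ),
)

# Create the Pod in the default namespace.
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The same spec could of course be written as a YAML manifest; the point here is only that "500m" and "1" are the millicore-based units the article refers to.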