Blog Series (5) - Cloud-Native System Performance
As an architect tuning cloud-native systems for better performance, you have to weigh multiple factors — some obvious, some not.
In this series I have compiled various challenges and cloud-native approaches that help improve overall system performance. The posts in this series underline the fact that addressing performance issues is a matter of identifying the bottlenecks and making the right trade-offs.
This is essentially my written-down mind map for driving performance-related discussions with fellow architects or executive teams.
PS: There is also a free eBook available for download (PDF and ePub formats).
Series Introduction
When it comes to architecting any IT system, performance is one of the key concerns. This post introduces and announces a series of articles dedicated to Cloud-Native System Performance.
Improving Compute
Applications are deployed on infrastructure to serve a specific business purpose. Poorly developed applications overuse the underlying capacity, while a well-developed application can execute the same task with less resource consumption. Resource consumption translates directly into cost, so it is inversely related to profit: better performance means lower resource consumption, which in turn means the same infrastructure can execute more tasks for the business.
Better Approaches To Data
Cloud-native system performance depends on data in many ways: how the data is stored, where it is stored, and the format and protocol used to transmit it. Business-oriented aspects also play a crucial role — classifying the data and imposing appropriate security restrictions adds additional compute overhead.
Better Network Performance
Cloud-native or not, the network and the internet have always been contributing factors to latency in system performance. A system hosted on one continent will always appear slow to users on a different continent.
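As a back-of-the-envelope illustration of why geography alone matters (the distance and fibre-speed figures below are my own assumptions, not from this series): light in optical fibre travels at roughly two-thirds the speed of light in a vacuum, which puts a hard physical floor under cross-continent round-trip times before any server processing happens at all.

```python
# Rough lower bound on cross-continent round-trip time.
# Assumed figures (illustrative only):
#   - light in optical fibre travels at ~200,000 km/s (~2/3 of c)
#   - a one-way path length of ~10,000 km (e.g. Europe to US West Coast)
FIBRE_SPEED_KM_PER_S = 200_000
PATH_KM = 10_000

one_way_s = PATH_KM / FIBRE_SPEED_KM_PER_S   # seconds for one direction
rtt_ms = 2 * one_way_s * 1000                # round trip, in milliseconds

print(f"Best-case RTT: {rtt_ms:.0f} ms")     # physics-only floor, no processing
```

Real-world latency is higher still (routing detours, queuing, TLS handshakes), which is why techniques such as CDNs and multi-region deployments move data closer to users rather than trying to make the network faster.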