DPACK Basics: Data Aggregation
Data aggregation is an important concept to understand when working with DPACK: it is fundamental to how the tool operates and underpins its statistical analysis of compute utility in IT environments.
Data aggregation is any process in which information is gathered from multiple systems, combined, and expressed in summary form, for purposes such as statistical analysis.
DPACK provides data aggregation and analysis of performance metrics of servers within a compute environment. It also provides statistical data to assist suppliers and consumers of technology with an understanding of their compute utility.
Previous methods of understanding compute usage within an IT environment consisted of measuring the “high water” marks for all of the systems and totaling the amounts. As the industry moved to consolidated resources such as cloud or virtualization, this proved to be a misleading practice that led to over-provisioning.
Consolidated resource environments must take into account the “spiky” nature of workloads and what happens when those patterns are blended, or “aggregated,” together. DPACK’s aggregation algorithms can more accurately define compute utility and drastically reduce the costs associated with acquiring technology. Aggregated data is displayed both as a summary and in graphical format, and it is the default view of all projects.
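The gap between the two sizing approaches can be sketched with a small example. This is an illustrative calculation only, not DPACK's actual algorithm, and the server names and IOPS figures are hypothetical: when spiky workloads peak at different times, summing per-server high-water marks overstates demand compared with blending the series first and taking the peak of the aggregate.

```python
# Hypothetical IOPS samples for three servers over the same six intervals.
# Each server spikes in a different interval (illustrative data only).
samples = {
    "server_a": [100, 900, 100, 100, 100, 100],
    "server_b": [100, 100, 900, 100, 100, 100],
    "server_c": [100, 100, 100, 900, 100, 100],
}

# "High water mark" sizing: take each server's own peak, then total them.
sum_of_peaks = sum(max(series) for series in samples.values())

# Aggregation: combine the series point-by-point first, then take the
# peak of the blended workload.
combined = [sum(point) for point in zip(*samples.values())]
peak_of_aggregate = max(combined)

print(sum_of_peaks)       # 2700 -- the over-provisioned estimate
print(peak_of_aggregate)  # 1100 -- what the blended workload actually demands
```

Because the spikes never coincide, the aggregate never needs anywhere near the 2,700 IOPS that high-water-mark sizing would suggest.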
These aggregate values provide a hardware- and platform-independent view of compute utility, which is the demand that must be met when upgrading or migrating the environment.