Description:
Abstract
USC researchers have developed an optimization technique that finds the best data placement scheme for processing-in-memory (PIM) systems, improving both performance and energy consumption. Experimental results demonstrate that this technique improves system performance by 9.8x and achieves a 2.3x energy reduction compared to conventional systems.
Benefit
- Improves performance of memory-intensive applications
- Improves system performance by 9.8x
- Achieves a 2.3x energy reduction
Market Application
The big data analytics market will grow to more than $203 billion by 2020. This era of big data leads programmers to write memory-intensive applications. Traditional CPUs, however, cannot process large volumes of data with fast response times, and memory bandwidth becomes a bottleneck for those applications.
The bottleneck occurs because data stored in memory must be moved to the CPU before any computation can be performed. This data movement degrades system performance because it takes time and consumes energy. One way to avoid this bottleneck is to locate processing units near, or inside, the main memory. This approach is referred to as processing-in-memory (PIM).
The entire computing industry, including Internet of Things (IoT), cloud, edge, and fog computing, is moving toward PIM to avoid these data movement challenges. A technique is needed that optimizes PIM by automatically determining where data should reside so that data movement is reduced.
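To make the data-placement idea concrete, the sketch below is a minimal, hypothetical illustration (not the patented USC technique, which the publication below formulates with a multi-layer network-theoretic strategy). It assumes a simple model in which each data block is accessed some number of times by the host CPU and by in-memory (PIM) compute units, and greedily homes each block with the unit that touches it most, so that fewer accesses require data movement:

```python
# Illustrative sketch only: a greedy data-placement heuristic.
# Model assumption: we know how often each data block is accessed
# by each compute unit (host CPU or an in-memory PIM unit).

def place_blocks(access_counts):
    """Map each block to the unit that accesses it most often.

    access_counts: {block: {unit: count}} -> {block: home_unit}
    """
    return {block: max(counts, key=counts.get)
            for block, counts in access_counts.items()}

def movement_cost(access_counts, placement):
    """Count accesses served from a unit other than the block's home,
    i.e., accesses that require data movement."""
    return sum(count
               for block, counts in access_counts.items()
               for unit, count in counts.items()
               if unit != placement[block])

# Hypothetical access profile for two data blocks.
accesses = {
    "A": {"cpu": 3, "pim0": 40},   # A is mostly touched in memory
    "B": {"cpu": 25, "pim0": 2},   # B is mostly touched by the CPU
}
home = place_blocks(accesses)       # A -> pim0, B -> cpu
cost = movement_cost(accesses, home)
```

With this placement, only the minority accesses (3 for block A, 2 for block B) cross between units, while a CPU-only system would move every access to block A. Real PIM placement must also account for capacity limits, task scheduling, and inter-task communication, which is what the optimization technique addresses.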
Publications
Xiao, Yao, Shahin Nazarian, and Paul Bogdan. "Prometheus: Processing-in-memory heterogeneous architecture design from a multi-layer network theoretic strategy." 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2018.
Stage of Development
- Experimentally validated
- Available for licensing