Chip architectures, and even local system architectures, have long found that the best way to improve total system performance and power consumption is to move memory as close to the processors as possible. This has led to cache architectures, and to memories tuned for those architectures, as discussed in part 1 of this article. But these architectures rest on several tacit assumptions that no longer hold once we look at larger systems. Discussions about the IoT and cloud computing have brought many of these assumptions to the forefront. The industry is still evolving, and it is not clear that the perfect solution has been found yet.
Read more at Semiconductor Engineering.