(2015~) Efficient Smart Memories for Data Intensive Computing
Near-data processing is a promising technique for reducing data transfers. Adding processing capabilities inside or close to the DRAM has high potential for performance and energy-efficiency improvements by avoiding large and inefficient data movements. The main objective of this project is to provide new software and hardware insights in the processing-in-memory (PIM) context, thus improving both execution time and energy consumption.
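As a first-order illustration of the trade-off, the sketch below models when offloading a streaming reduction to a PIM unit pays off. All bandwidth and throughput figures are illustrative assumptions for this sketch, not measured numbers from the project.

```python
# Hypothetical first-order cost model: host execution streams all data over
# the external memory bus, while PIM computes at the stack's internal
# bandwidth and returns only a small result. All constants are assumptions.
DRAM_BUS_BW = 20e9        # assumed external memory bandwidth (bytes/s)
PIM_INTERNAL_BW = 320e9   # assumed aggregate bandwidth inside the memory stack
CPU_OPS = 100e9           # assumed host compute throughput (ops/s)
PIM_OPS = 40e9            # assumed (weaker) PIM compute throughput (ops/s)

def host_time(n_bytes, ops):
    """Host must pull the whole dataset over the memory bus, then compute."""
    return n_bytes / DRAM_BUS_BW + ops / CPU_OPS

def pim_time(n_bytes, ops, result_bytes=8):
    """PIM reads at internal bandwidth; only the result crosses the bus."""
    return n_bytes / PIM_INTERNAL_BW + ops / PIM_OPS + result_bytes / DRAM_BUS_BW

n = 1 << 30                    # scan 1 GiB, one op per 8-byte word
print(host_time(n, n // 8))    # dominated by the bus transfer
print(pim_time(n, n // 8))     # smaller here despite the weaker compute
```

Under these assumed numbers the streaming workload is bus-bound, so PIM wins; compute-heavy kernels with high data reuse would favor the host instead.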
This project received a Serrapilheira Grant of R$ 100,000 for its first year (2018) and a second grant of R$ 1 million for the second phase (2020~2023).
(2016~) Migrating Database Operations to In-Memory Processing
A large share of the cost of processing read-mostly databases lies in moving data through the memory hierarchy rather than in computing on it inside the processor. This data movement is penalized by the performance gap between processor and memory, the well-known memory wall problem. The emergence of smart memories, such as the Hybrid Memory Cube (HMC), mitigates the memory wall by executing instructions in a logic layer integrated with a stack of DRAM dies. These memories not only enable in-memory databases but also have the potential to compute database operations in memory.
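A selection (filter) scan is a typical memory-bound candidate for such offloading: executed in the memory's logic layer, only qualifying tuples would cross the memory bus. The sketch below contrasts the bytes a host-side scan must transfer with the bytes a PIM-side scan would return; the record size and table contents are illustrative assumptions.

```python
# Hypothetical sketch: a selection scan over a column, comparing data moved
# by a host-side scan vs. a PIM-side scan that returns only matches.
RECORD_BYTES = 16  # assumed tuple size, for illustration only

def select_scan(table, predicate):
    """Full scan: every record is read; only matching records are returned."""
    return [row for row in table if predicate(row)]

table = list(range(100_000))
matches = select_scan(table, lambda v: v % 100 == 0)

bytes_host = len(table) * RECORD_BYTES    # host scan moves the whole table
bytes_pim = len(matches) * RECORD_BYTES   # PIM scan moves only the result
print(f"selectivity {len(matches) / len(table):.1%}: "
      f"{bytes_host} B vs {bytes_pim} B transferred")
```

At 1% selectivity the transferred volume drops by two orders of magnitude, which is why low-selectivity scans are attractive PIM candidates.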
(2016~) Thread and data placement for HPC
Many high-performance computing applications present different phases during their execution. Nevertheless, thread and process placement techniques usually provide only static methods to improve data and thread locality. Similarly, cloud computing data centers may present latency variations over the execution time of applications. The main objective of this project is to propose techniques that provide the best static and dynamic mapping of parallel applications, considering their resource usage.
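One common way to frame the static mapping problem is to place heavily communicating threads on nearby cores. The sketch below is a simple greedy heuristic under that framing; the communication and distance matrices are illustrative assumptions, not data from this project.

```python
# Hypothetical sketch of communication-aware static thread mapping: given a
# thread-to-thread communication matrix and core-to-core distances, a greedy
# heuristic places heavily communicating threads on nearby cores.

def greedy_mapping(comm, dist):
    """Map thread i -> core, processing thread pairs by communication volume."""
    n = len(comm)
    pairs = sorted(((comm[i][j], i, j) for i in range(n)
                    for j in range(i + 1, n)), reverse=True)
    placement, free = {}, set(range(n))
    for _, i, j in pairs:
        for t in (i, j):
            if t not in placement:
                # pick the free core closest to t's already-placed partners
                best = min(free, key=lambda c: sum(
                    dist[c][placement[u]] * comm[t][u] for u in placement))
                placement[t] = best
                free.remove(best)
    return placement

def total_cost(placement, comm, dist):
    return sum(comm[i][j] * dist[placement[i]][placement[j]]
               for i in range(len(comm)) for j in range(i + 1, len(comm)))

# Illustrative machine: two pairs of nearby cores, distance 2 across pairs.
dist = [[0, 1, 2, 2], [1, 0, 2, 2], [2, 2, 0, 1], [2, 2, 1, 0]]
# Illustrative application: threads 0-1 and 2-3 communicate heavily.
comm = [[0, 10, 1, 1], [10, 0, 1, 1], [1, 1, 0, 10], [1, 1, 10, 0]]
mapping = greedy_mapping(comm, dist)
print(mapping, total_cost(mapping, comm, dist))
```

The heuristic keeps each heavily communicating pair within one nearby core pair; dynamic mapping would re-run such a step when a phase change is detected.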
(2016~) Expandable Network-on-Chip
Interconnects are essential for providing high-bandwidth communication in parallel systems, and the Network-on-Chip (NoC) is the current intra-chip interconnection of choice. However, NoCs are built from static structures that hardly allow the inclusion or removal of elements. Targeting this issue, a new paradigm emerges: the Expandable Network-on-Chip (ENoC), which allows different NoCs or chips to be interconnected on the fly by unifying them. Inter-chip communication is performed over a high-bandwidth, very short-range wireless channel. Nevertheless, connecting systems from different manufacturers raises serious security issues.
(2012~) Cache line usage predictor
In recent years, the constant reduction in transistor size has allowed cache memories to grow greatly in capacity. Nowadays, caches occupy nearly 50% of a processor's chip area; this growth was also driven by the memory wall and dark silicon issues. However, the larger capacity increases the energy required to retain data in and operate such big caches, which makes cache energy consumption an important area of study. Many methods already exist to reduce the energy consumed by caches.
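A cache line usage predictor in this spirit can be sketched as a table, indexed by the instruction that caused the fill, that remembers which words of a line were touched before eviction so that the next fill from the same instruction can power only those words. The structure and masks below are illustrative assumptions, not the project's actual design.

```python
# Hypothetical sketch of a cache line usage predictor: per fill PC, remember
# the bitmask of 8-byte words touched before eviction; on the next fill from
# that PC, enable only the predicted words to save static energy.
WORDS_PER_LINE = 8  # assumed 64-byte line, 8-byte words

class UsagePredictor:
    def __init__(self):
        self.table = {}  # fill PC -> last observed usage bitmask

    def predict(self, pc):
        """Mask of words to power on at fill time; default: the whole line."""
        return self.table.get(pc, (1 << WORDS_PER_LINE) - 1)

    def train(self, pc, used_mask):
        """On eviction, record which words were actually touched."""
        self.table[pc] = used_mask

pred = UsagePredictor()
pred.train(0x400A10, 0b00000011)   # only words 0 and 1 were used before eviction
print(bin(pred.predict(0x400A10))) # next fill from this PC powers two words
print(bin(pred.predict(0xDEAD)))   # unknown PC: conservatively enable the line
```

A real predictor would add confidence counters and a fallback path for mispredictions (re-enabling words on demand), trading a small latency penalty for the energy saved on unused words.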