(2015~) Efficient Smart Memories for Data Intensive Computing – Near-data processing is a promising technique to reduce data transfers. Adding processing capabilities inside or close to the DRAM has high potential for performance and energy-efficiency improvements by avoiding large, inefficient data transfers. The main objective of this project is to provide new software and hardware insights for the processing-in-memory (PIM) context, thus improving execution time and energy consumption.
Portuguese only: Below you can learn a bit more about this smart memories project (at 3 levels of depth).
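The data-transfer savings behind PIM can be sketched with a toy model. This is a simplified illustration, not a real PIM API: it compares how many bytes cross the memory bus when a reduction runs on the host (every element travels to the CPU) versus inside the memory device (only the result travels).

```python
# Hypothetical cost model: counts bytes crossing the memory bus for a sum
# computed host-side versus near-data (inside the memory's logic layer).

ELEM_SIZE = 8  # assumed bytes per 64-bit element

def host_side_sum(data):
    """Host-side: every element crosses the memory bus to the CPU."""
    bytes_moved = len(data) * ELEM_SIZE
    return sum(data), bytes_moved

def pim_side_sum(data):
    """PIM-side: the reduction happens in the memory device;
    only the 8-byte result crosses the bus."""
    bytes_moved = ELEM_SIZE
    return sum(data), bytes_moved

data = list(range(1_000_000))
host_result, host_bytes = host_side_sum(data)
pim_result, pim_bytes = pim_side_sum(data)
assert host_result == pim_result
print(f"host moved {host_bytes} B, PIM moved {pim_bytes} B")
```

Under this toy model the host path moves the whole array while the PIM path moves a single word, which is the kind of gap that motivates processing near the DRAM.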
(2016~) Migrating Database Operations to In-Memory Processing – A large part of the burden of processing read-mostly databases consists of moving data around the memory hierarchy rather than processing it in the processor. This data movement is penalized by the performance gap between the processor and the memory, the well-known memory wall problem. The emergence of smart memories, such as the Hybrid Memory Cube (HMC), mitigates the memory wall by executing instructions in a logic chip integrated into a stack of DRAMs. These memories not only enable in-memory databases but also have the potential for in-memory computation of database operations.
Portuguese only: Below you can learn a bit more about this project on migrating database operations to in-memory processing.
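The idea of computing a database operation near the DRAM stack can be sketched in simulation. The sketch below is a hedged illustration, assuming a simple selection query; names such as `hmc_select` and the tuple size are hypothetical, not part of any real HMC interface. It models a `WHERE price > threshold` scan evaluated in the memory's logic layer, so only qualifying tuples travel to the processor.

```python
# Simulation only: contrasts a conventional selection scan, which reads the
# whole table into the CPU, with a near-memory scan that filters inside the
# DRAM stack and ships only matching tuples over the link.

TUPLE_SIZE = 16  # assumed bytes per (id, price) tuple

def host_select(table, threshold):
    """Conventional path: the entire table crosses the memory link."""
    moved = len(table) * TUPLE_SIZE
    result = [(i, p) for (i, p) in table if p > threshold]
    return result, moved

def hmc_select(table, threshold):
    """Hypothetical PIM path: the predicate runs in the logic layer;
    only matching tuples cross the link."""
    result = [(i, p) for (i, p) in table if p > threshold]
    moved = len(result) * TUPLE_SIZE
    return result, moved

table = [(i, i % 200) for i in range(10_000)]
r_host, moved_host = host_select(table, 100)
r_hmc, moved_hmc = hmc_select(table, 100)
assert r_host == r_hmc      # same answer either way
assert moved_hmc < moved_host  # but far less data in motion
```

The less selective the query, the smaller the saving; for highly selective scans, almost nothing needs to leave the memory device.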
(2016~) Expansible Network-on-Chip – Interconnects are of great importance for providing high-bandwidth communication among parallel systems. The Network-on-Chip (NoC) is the current intra-chip interconnection of choice. However, NoCs are formed by static structures that hardly allow the inclusion or removal of elements. Targeting this issue, a new paradigm emerges: the Expansible Network-on-Chip (ENoC). It allows on-the-fly interconnection of different NoCs or chips by unifying them, with inter-chip communication performed over a high-bandwidth, very short-range wireless channel. Nevertheless, connecting systems from different manufacturers raises serious security issues.
(2012~) Cache line usage predictor – In recent years, the constant reduction in transistor size has allowed cache memories to grow greatly in capacity. Nowadays, caches occupy nearly 50% of a processor's chip area, an increase also driven by the memory wall and dark silicon issues. However, this capacity growth raises the energy consumed to retain data in, and operate over, such large cache memories, making cache energy consumption an important study area. Many methods exist to reduce the energy consumed by these caches.
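One family of such methods predicts how many times a cache line will be touched before eviction, so the line can be powered down after its last expected use. The sketch below is an assumed, simplified model for illustration only, not this project's actual mechanism: it learns each line's touch count from its previous lifetime and flags the line as a power-off candidate once that count is reached again.

```python
# Illustrative usage predictor (simplified, hypothetical design): remembers
# how many accesses a line received before its last eviction, and on refill
# predicts that count so the line can be turned off after its final touch.

from collections import defaultdict

class UsagePredictor:
    def __init__(self):
        self.learned = {}             # address -> touches in last lifetime
        self.live = defaultdict(int)  # address -> touches this lifetime

    def on_fill(self, addr):
        """Line brought into the cache: return the predicted usage count
        (None until at least one full lifetime has been observed)."""
        self.live[addr] = 0
        return self.learned.get(addr)

    def on_access(self, addr):
        """Count a touch; True means the prediction was reached, so the
        line is now a candidate to be powered down."""
        self.live[addr] += 1
        return self.learned.get(addr) == self.live[addr]

    def on_evict(self, addr):
        """Line leaves the cache: learn its usage count for next time."""
        self.learned[addr] = self.live.pop(addr)

p = UsagePredictor()
p.on_fill(0x40)
for _ in range(3):
    p.on_access(0x40)
p.on_evict(0x40)                          # learns: 3 touches per lifetime
p.on_fill(0x40)                           # now predicts 3
flags = [p.on_access(0x40) for _ in range(3)]
assert flags == [False, False, True]      # third touch flags power-off
```

A real predictor would index by program counter or line tag with bounded tables and handle mispredictions, but the learn-then-predict loop above captures the core idea.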