I just don't understand why latency has improved by only about 1.3x since 1999. DRAM frequency increased from, say, 133 MHz to 1600 MHz; shouldn't that give roughly a 10x latency reduction, assuming the same number of clocks to fetch data from DRAM? Can anyone help me?
Hi, as far as I know, latency means the time from command to result. For example, you give a dog a command and it does it; the time that was spent is the latency. Logic in a circuit has a minimum switching time. We can reduce it by redesigning the circuitry, but there is always a physical limit. You can find a better explanation in Harris and Harris, "Digital Design and Computer Architecture".
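A quick sketch of why the frequency increase doesn't translate into a 10x latency reduction: the assumption "the same number of clocks" doesn't hold. As the clock got faster, the number of clocks per access (e.g. the CAS latency) grew almost as fast, so the absolute time stayed nearly flat. The CL values below are illustrative figures for typical modules, not taken from any specific datasheet:

```python
# Absolute access latency = CAS latency (in clocks) / command clock frequency.
# Illustrative (assumed) values for typical modules of each generation.
modules = {
    # name: (command clock in MHz, CAS latency in clocks)
    "SDR-133 CL3":    (133,  3),
    "DDR3-1600 CL11": (800, 11),  # DDR3-1600 = 1600 MT/s on an 800 MHz clock
}

for name, (mhz, cl) in modules.items():
    ns = cl / mhz * 1000  # clocks / MHz -> microseconds, *1000 -> ns
    print(f"{name}: {ns:.1f} ns to first data")
```

Run it and you get roughly 22.6 ns for the old module and 13.8 ns for the new one: about a 1.6x improvement despite a ~12x faster clock, which is the same ballpark as the ~1.3x figure from the lecture.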
Dear Professor Onur Mutlu, please add Spanish subtitles to all your master classes to contribute to the engineering community of Latin America. Cheers
I think that task falls to you, my friend