A young computer scientist and two colleagues show that searches within data structures called hash tables can be much faster than previously deemed possible.
I have skimmed the paper “Tiny Pointers”; it seems they use several levels of tables (“load balancing” and “overflow”) to store the hashes, and they claim they can save space/time by splitting the hash into a part that can be recomputed and a unique part.
It doesn’t look really groundbreaking, more like a tool that has its applications in some places.
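For anyone curious what “splitting up the hash” roughly means, here is a minimal quotient-style sketch of that idea: since the bucket index already encodes part of the hash, only the remaining bits need to be stored. This is my own illustration under those assumptions, not the paper’s actual multi-level construction; all names and parameters below are made up.

```python
import hashlib

NUM_BUCKETS = 1 << 16   # 2^16 buckets; the bucket index carries 16 hash bits "for free"
BUCKET_BITS = 16

def full_hash(key: str) -> int:
    """32-bit hash of the key (any reasonable hash works for this sketch)."""
    return int.from_bytes(hashlib.blake2b(key.encode(), digest_size=4).digest(), "big")

def split_hash(h: int):
    """Split into (bucket, remainder): the bucket is recomputable from where
    the entry sits, so only the remainder has to be stored."""
    bucket = h & (NUM_BUCKETS - 1)
    remainder = h >> BUCKET_BITS
    return bucket, remainder

def reconstruct(bucket: int, remainder: int) -> int:
    """Recover the full hash from the stored remainder plus the bucket index."""
    return (remainder << BUCKET_BITS) | bucket

# Tiny demo: the table only ever stores the 16-bit remainders.
table = [[] for _ in range(NUM_BUCKETS)]

def insert(key: str):
    bucket, remainder = split_hash(full_hash(key))
    table[bucket].append(remainder)

def contains(key: str) -> bool:
    bucket, remainder = split_hash(full_hash(key))
    return remainder in table[bucket]

insert("alice")
assert contains("alice")
assert reconstruct(*split_hash(full_hash("alice"))) == full_hash("alice")
```

The point of the trick is just that half the hash bits never need to be written down; how the paper layers its load-balancing and overflow tables on top of that is more involved than this toy.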