Daniel Lubarov

The actual time complexity of array access

Suppose we have an array holding n bits of data. What is the time complexity of accessing the i-th bit? I claim that it is in \Omega(\sqrt[3]{n}).

Firstly, there is a limit to how much information we can store in a given amount of space. A recent Stanford experiment achieved an information density of around 5 kilobytes per square nanometer. But the exact empirical value isn't important; let's just say we can store at most ρ bits per unit of volume.

Storing n bits of information, then, requires an apparatus with a volume of at least n/ρ. To minimize the average distance between the CPU and a unit of storage, we could use a spherical storage apparatus and place the CPU at its center. The volume of a sphere is (4/3) \pi r^3, so with a bit of algebra we get

r = \sqrt[3]{\frac{3 n}{4 \pi \rho}} \in \Omega(\sqrt[3]{n}).
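As a quick sanity check, here is a minimal Python sketch; the density RHO and the helper min_radius are made up for illustration, with RHO standing in for whatever ρ turns out to be:

    import math

    RHO = 1e27  # assumed density in bits per cubic meter -- a placeholder, not a measured value

    def min_radius(n_bits: float) -> float:
        """Radius of the smallest sphere holding n_bits at density RHO: (3n / (4 pi rho))^(1/3)."""
        return (3.0 * n_bits / (4.0 * math.pi * RHO)) ** (1.0 / 3.0)

    # Scaling check: a 1000x larger array needs only a 10x larger radius, since 1000^(1/3) = 10.
    assert math.isclose(min_radius(1e15) / min_radius(1e12), 10.0)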

Suppose our sphere S is centered at the origin. Then the average distance from the origin to a point of S is

\frac{1}{V} \iiint_S d(x,y,z) \, \mathrm{d}x \, \mathrm{d}y \, \mathrm{d}z,

where d is the three-dimensional Euclidean distance to a point from the origin and V = (4/3) \pi r^3 is the sphere's volume. This comes out to be (3/4) r. The integration is left as an exercise to the reader.
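For readers who would rather check the answer than do the exercise, one way to evaluate it is in spherical coordinates, where the volume element is r'^2 \sin\theta \, \mathrm{d}r' \, \mathrm{d}\theta \, \mathrm{d}\phi:

\frac{1}{V} \int_0^{2\pi} \int_0^{\pi} \int_0^{r} r' \cdot r'^2 \sin\theta \, \mathrm{d}r' \, \mathrm{d}\theta \, \mathrm{d}\phi = \frac{3}{4 \pi r^3} \cdot 2\pi \cdot 2 \cdot \frac{r^4}{4} = \frac{3}{4} r.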

Information cannot travel faster than c, the speed of light, which is a constant. So access time grows at least linearly with distance, which in turn is linear in the radius of our storage apparatus. In particular, taking d to be the average distance computed above, the average access time satisfies

t \ge \frac{d}{c} = \frac{3 r}{4 c} \in \Omega(r).

Since t \in \Omega(r) and r \in \Omega(\sqrt[3]{n}), it follows that

t \in \Omega(\sqrt[3]{n}).

In other words, the time complexity of array access is lower bounded by the cube root of the array's size.
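To get a feel for the numbers, here is a small Python sketch of the bound; as before, RHO and access_time_bound are illustrative placeholders rather than physical measurements:

    import math

    C = 299_792_458.0  # speed of light in m/s
    RHO = 1e27         # assumed storage density in bits per cubic meter (placeholder)

    def access_time_bound(n_bits: float) -> float:
        """Lower bound on the average access time: (3/4) r / c, with r = (3n / (4 pi rho))^(1/3)."""
        r = (3.0 * n_bits / (4.0 * math.pi * RHO)) ** (1.0 / 3.0)
        return 0.75 * r / C

    for n in (1e12, 1e18, 1e24):
        print(f"n = {n:.0e} bits: t >= {access_time_bound(n):.3e} s")

    # Multiplying n by 10^6 multiplies the bound by (10^6)^(1/3) = 100.
    assert math.isclose(access_time_bound(1e18) / access_time_bound(1e12), 100.0)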