A recent post benchmarked the actual wall-clock running time (not the number of instructions) for a number of algorithms and found that failing to account for the effects of caching made the classic complexity graphs lie. A more accurate model is to multiply your algorithm's classic running time by the square root of N, because the cost of a random memory access grows roughly with the square root of the amount of memory being touched.
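As a rough illustration of that rule of thumb (the function names and the simple cost model here are my own sketch, not taken from the post), a cache-aware estimate can be produced by scaling the classic operation count by √N:

```python
import math

def classic_estimate(ops: int) -> float:
    """Classic RAM-model estimate: every operation costs one unit."""
    return float(ops)

def cache_aware_estimate(ops: int, n: int) -> float:
    """Hypothetical cache-aware estimate: scale each (random) memory
    access by sqrt(N), reflecting that larger working sets spill out
    of the faster cache levels into slower memory."""
    return ops * math.sqrt(n)

# For 1,000,000 random accesses over 1,000,000 items, the classic
# model predicts ~1e6 units of work, while the sqrt(N)-scaled model
# predicts ~1e9, a thousandfold difference the graphs would hide.
n = 1_000_000
print(classic_estimate(n))        # 1000000.0
print(cache_aware_estimate(n, n)) # 1000000000.0
```

The point is not that this toy model is precise, but that the gap between the two estimates grows without bound as N does, which is exactly why instruction counts alone mislead.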
Thursday, August 25, 2016
Most services on the internet work by having a lot of servers, owned by the same group of people, running software that receives input from untrusted clients and uses it to craft output for them. In this classic client-server network model, trust ends at the perimeter of the server farm: you trust your own systems and validate everything from clients against your own state. This model is starting to show growing pains as Moore's law breaks down. As the pace of CPU development slows, even the most powerful servers are losing the ability to outpace the aggregate request rate of the hundreds of thousands of cheaper consumer devices in clients' hands.
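A minimal sketch of that trust boundary (the account-balance state and function names here are hypothetical, chosen only for illustration): the server keeps the authoritative state and re-validates every untrusted request against it, rather than trusting any value the client claims:

```python
# Authoritative state lives on the server; clients only send requests.
balances = {"alice": 100, "bob": 25}

def handle_transfer(sender: str, recipient: str, amount: int) -> bool:
    """Apply a transfer only if it is valid against server-side state.

    The client may claim anything; the server checks its own records.
    """
    if sender not in balances or recipient not in balances:
        return False  # unknown account: reject
    if amount <= 0 or balances[sender] < amount:
        return False  # no overdrafts, regardless of what the client says
    balances[sender] -= amount
    balances[recipient] += amount
    return True

print(handle_transfer("alice", "bob", 30))   # True: alice has the funds
print(handle_transfer("alice", "bob", 500))  # False: rejected server-side
```

Every request costs the server CPU time for this validation, which is why the shrinking gap between server and client hardware matters.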