afeller08/complexity-theory-stuff

Some python code where I try to come up with asymptotically fast algorithms, and some C code that is designed to be just plain fast.

The python code is generally very inefficient and not meant to be run. I use python because it's less ambiguous than pseudocode, it contains libraries that I can import and reference unambiguously, and it is almost as readable as pseudocode.

I will generally assume that imported libraries run in polynomial time, but if I need something to run faster, I will write an implementation that clearly has my desired runtime.

Also, I will occasionally re-implement standard optimizations explicitly for clarity. For instance, I know that CPython implements len efficiently, as well as things like string concatenation, but I may from time to time rewrite these things explicitly if an asymptotically inefficient implementation would invalidate the correctness of my claims about efficiency.
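For example, here is a hedged sketch of what such explicit rewrites might look like (the function names are mine, not taken from the repo's code): len and string concatenation written out so that their asymptotic cost is visible rather than hidden behind CPython's implementation.

```python
def explicit_len(xs):
    # One step per element, so the O(n) cost is explicit,
    # even though CPython's built-in len is O(1).
    count = 0
    for _ in xs:
        count += 1
    return count

def concat_all(parts):
    # Build via a list and a single join so the total cost is
    # clearly O(total length), not the O(n**2) of naive repeated
    # concatenation.
    out = []
    for p in parts:
        out.append(p)  # amortized O(1) per piece
    return "".join(out)  # O(total length)
```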

Finally, it is generally my policy when discussing complexity theory to be as anal as possible in my attentiveness to detail. For example, people regularly claim that it is possible to look up an entry in a hashmap in constant time (i.e., that we can ignore the size of the database). Such assertions are patently absurd, because the lookup takes at least linear time with respect to the size of the input key. Since there are only 2 ** k strings of length k, a database with n entries forces key lengths of at least log(n); equivalently, the number of entries grows at most exponentially with respect to the length of the keys. Thus we have a hard lower bound of log(n) time just to deindex the information. The fact that key lengths typically grow much faster than log(n) does not negate the fact that the lookup takes at least log(n) time; rather, it says that, generally speaking, lookups take much longer than log(n) time.
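The counting argument above can be spelled out in a few lines (the function name is mine, introduced only for illustration): distinguishing n keys over a binary alphabet requires some key of length at least ceil(log2(n)), which is where the log(n) lower bound on lookup comes from.

```python
import math

def min_key_length(num_entries):
    # There are 2**k binary strings of length k, so telling apart
    # num_entries distinct keys needs key length >= ceil(log2(num_entries)).
    # Any lookup that reads such a key therefore takes Omega(log n) time.
    if num_entries <= 1:
        return 1
    return math.ceil(math.log2(num_entries))
```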

If I say that something can be done in constant time, I will always mean that it can be done without reading the entire input. If I am talking about a problem that depends on X and Y, and I specify that it can be done with some efficiency with respect to X, I am assuming that there is no inherent relationship between the length of X and the length of Y. For example, you can talk about division for a fixed denominator size. There are plenty of scenarios where the denominators are constrained to twenty digits or fewer while the numerators can be arbitrarily large. The statement, "Division can be performed in linear time with respect to the length of the numerator," is true and worth mentioning: you can memoize on the denominator. Completing the memoization requires exponential time with respect to the length of the denominator, which we can think of as preprocessing. There are other problems for which we don't know whether we can achieve linear efficiency for the first term with respect to the second. For an obviously similar problem, we don't know whether we can memoize to get linear time efficiency with respect to a for the algorithm has_factor_smaller_than(a, b): an algorithm that determines whether there exists any number smaller than b that divides a.
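One way to realize the memoization described above (a sketch under my own assumptions; the names are hypothetical, not from the repo): for a single fixed denominator d, precompute the long-division transition table over (remainder, next digit) pairs. The table has 10 * d entries, which is exponential in the length of d, but once it is built, dividing an L-digit numerator costs O(L) table lookups.

```python
def build_table(d):
    # Preprocessing: exponential in len(d), since d itself can be
    # exponential in its digit length. table[(r, digit)] maps the
    # current remainder and next numerator digit to
    # (quotient_digit, new_remainder).
    table = {}
    for r in range(d):
        for digit in range(10):
            v = r * 10 + digit
            table[(r, digit)] = (v // d, v % d)
    return table

def divide(numerator_digits, table):
    # Linear in the length of the numerator: one lookup per digit.
    r = 0
    quotient = []
    for digit in numerator_digits:
        qd, r = table[(r, digit)]
        quotient.append(qd)
    return quotient, r  # quotient digits (possibly with leading zeros) and remainder
```

For instance, dividing 1234 by 7 with this table yields quotient digits [0, 1, 7, 6] and remainder 2, i.e. 176 remainder 2.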

Whenever I make a statement about the possibility of computing an algorithm over X and Y with respect to the length of the term in X, I will also explain what that algorithmic complexity is with respect to the term in X and the term in Y. In the above example, I clarified that the algorithm I was thinking of for achieving division in linear time with respect to the length of the numerator requires exponential time with respect to the length of the denominator.

These examples are intentionally trivial. They are not meant to provide deep insights. They are meant to illustrate my claim that we frequently overlook important statements about complexity theory by being insufficiently anal.
