Working with float32 data #5776
I was wondering about the following: I'm working with a rather large dataset, which fits into my RAM as float32, but when I try to train a simple SGD on it, the model copies my data to float64, causing a MemoryError.
I can change this in my local sklearn build, but I guess there is a good reason why this is not a free parameter?
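To make the memory cost concrete, here is a minimal NumPy sketch of what the upcast does (the array shape is illustrative, not from the original report): casting a float32 array to float64 allocates a second copy at twice the size, which is exactly the allocation that can fail on a dataset that only barely fits in RAM as float32.

```python
import numpy as np

# A float32 array (4 bytes per element) and the float64 copy
# (8 bytes per element) made when an estimator upcasts its input.
X32 = np.ones((1000, 50), dtype=np.float32)
X64 = X32.astype(np.float64)  # a full extra copy at double the size

assert X32.nbytes == 1000 * 50 * 4
assert X64.nbytes == 2 * X32.nbytes  # the upcast doubles memory usage
```

Until float32 is supported natively, one possible workaround is to stream the data through `partial_fit` in chunks, so that only one chunk at a time is upcast to float64.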
PR welcome ;)
One of the reasons not to do that was the explosion in the amount of generated C code, but I think with #5492 we can be somewhat less careful about that.