Logistic Regression with LBFGS in Spark 1.6 and 2.1 #7
Hi @colbec, sorry about the late reply, and thanks for having a look at the repos. I'm afraid I didn't have the time to update the repos and use a more recent version of Spark, so I can't reproduce your results. They look very interesting indeed! Hopefully somebody else will be able to do it and will let us know :) But please, keep us up to date with your findings.
Hi @jadianes, no problem. Unfortunately I have not been able to convince the Spark community that my results point to a problem. My post with Apache JIRA mentioned above has been dismissed as "not a problem." The result is that I for one will be using 1.6.2 for Logistic Regression, since (for me, anyway) it is 3 times faster than 2.1.0 and produces a more attractive result.
For those following along, I have found some discussions related to mllib (http://apache-spark-developers-list.1001551.n3.nabble.com/Switch-RDD-based-MLlib-APIs-to-maintenance-mode-in-Spark-2-0-td17033.html) which indicate that from Spark 2.x onwards there is a preference for the newer "ml" library over "mllib", which is going into maintenance-only mode. With so much legacy tutorial material out in the wild using mllib, there will inevitably be a period of transition where we just have to be a bit careful.
When I run `interactions_df = sqlContext.createDataFrame(row_data)` in the pyspark shell, I get the below error. Not able to figure out the issue here. `Caused by: ERROR XJ040: Failed to start database 'metastore_db' with class loader`
@jadianes Nice tutorial on Logistic Regression, thank you.
I ran the tutorial on Spark 1.6.2 and 2.1.0. Both ran fine and I could repeat your results perfectly in 1.6.2, but I would like to offer the following observation re 2.1.0. In 2.1.0 the process takes about 3 times longer to run and produces a different answer than that produced by 1.6.2. I thought this was strange and found that in the list of Spark tasks 2.1.0 was calling a non-LBFGS algorithm. I raised this issue in a JIRA question (https://issues.apache.org/jira/browse/SPARK-16768). It seems that even though a user can import the LBFGS version into pyspark, call help on it, and actually call it, I don't think it actually runs LBFGS.
http://spark.apache.org/docs/latest/mllib-optimization.html has some other information on LBFGS in Spark.
Later, when 2.1.0 becomes the standard, your readers may find that they don't get your results for accuracy. Or maybe I just missed something; can anyone confirm my observations?