Support for ml_als with recommendations and pipelines #1578
Comments
|
We can do better with that error message, though! |
|
Thanks for the clarification on ml_recommend and the ml_als pipeline example. It's helpful to someone like me who is inexperienced with Spark ML pipelines. |
|
I am not able to get ml_fit to execute. My pipeline looks like this: My data looks like this: Source: lazy query [?? x 4] ml_fit is producing this error: Error: java.lang.IllegalArgumentException: Field "ProductID" does not exist. Traceback:
I am getting the same error on both AWS and IBM Data Platform cloud services. |
|
@campanell check the spelling of your column names, e.g. |
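To illustrate the point about spelling: the column names passed to pipeline stages must match the Spark DataFrame's columns exactly (they are case-sensitive), or fitting fails with the `IllegalArgumentException: Field "..." does not exist` error above. A minimal sketch, assuming hypothetical columns `UserID`, `ProductID`, and `Rating` in a Spark table `ratings_tbl`:

```r
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Index the string ID columns, then feed the indices to ALS.
# The input_col values must match the data's column names exactly.
pipeline <- ml_pipeline(sc) %>%
  ft_string_indexer(input_col = "UserID", output_col = "user_index") %>%
  ft_string_indexer(input_col = "ProductID", output_col = "product_index") %>%
  ml_als(
    rating_col = "Rating",
    user_col = "user_index",
    item_col = "product_index"
  )

model <- ml_fit(pipeline, ratings_tbl)
```

If a stage references `"ProductId"` while the table's column is `"ProductID"`, the fit fails at that stage with exactly the error quoted above.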
|
@kevinykuo good to know! I wonder if you would want to consider having an S3 method in |
|
I'd like to stay away from generalizing specialized helper functions to pipeline objects, because it would require inspection of the pipeline and making assumptions on which stage to extract (e.g. the ALS routine could be in any position in the pipeline and there could be more than one), which in turn could lead to unexpected behavior. Closing this, but @campanell feel free to let me know if you run into further issues! |
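To make the design point above concrete: when a specialized helper like `ml_recommend()` is needed on a pipeline model, the ALS stage can be pulled out explicitly rather than having the helper guess which stage to use. A sketch, assuming `model` is the fitted pipeline from earlier in the thread and ALS is its third stage:

```r
# Inspect the fitted stages to find the ALS model's position/UID
ml_stages(model)

# Extract the fitted ALS stage by position (here assumed to be stage 3)
als_model <- ml_stage(model, 3)

# Then call the specialized helper on the extracted stage
recs <- ml_recommend(als_model, type = "items", n = 5)
```

This keeps the assumption (which stage is the ALS model) explicit in user code instead of baked into a generic S3 method.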
|
Thanks for the instructions. So embarrassed about the typos. I should not be afraid of the Boogeyman (i.e. Scala error messages), but I still am. I was able to get to ml_recommend. However, I would like to know where in the pipeline I should use ft_index_to_string to convert the product_index and user_index back to ProductId and UserId? |
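One way to do the conversion asked about above is to apply `ft_index_to_string()` to the `ml_recommend()` output rather than inside the pipeline, supplying the labels learned by the corresponding string indexer. A sketch, assuming `model` is the fitted pipeline with the user indexer as stage 1 and `als_model` its extracted ALS stage:

```r
recs <- ml_recommend(als_model, type = "items", n = 5)

# Map the numeric user_index back to the original string IDs using
# the labels the StringIndexerModel learned during fitting.
recs_labeled <- recs %>%
  ft_index_to_string(
    input_col  = "user_index",
    output_col = "UserID",
    labels     = ml_labels(ml_stage(model, 1))
  )
```

The stage positions and column names here are assumptions for illustration; check `ml_stages(model)` for the actual layout.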
|
Was able to get the values from ft_index_to_string back to the recommend data frame. |
So far so good, however: