Description
**Is your feature request related to a problem? Please describe.**
Currently, we can deploy Python ML models with distributed batch scoring using the ParallelRunStep, and then call that deployed pipeline from Data Factory. Some of the data scientists I work with build their models in R, but ParallelRunStep can only run Python scripts, not R scripts. Because of this we're evaluating other deployment options, but it would be great if we could build the pipeline in Python while the scoring code itself is written in R.
**Describe the solution you'd like**
Currently, we build ML pipelines in ML Studio using the Python SDK and the ParallelRunStep. We still want to use the Python SDK to build our pipelines, but it would be great if the ParallelRunStep could run an R script instead of only Python scripts.
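For context, a ParallelRunStep entry script today defines an `init()` function (called once per worker process) and a `run(mini_batch)` function (called once per mini-batch). The sketch below is self-contained, so it uses a stub in place of a real model; an actual entry script would deserialize a trained model in `init()` (e.g. with joblib):

```python
import os

model = None

def init():
    # Called once per worker process; load the model here.
    # A stub "model" is used so this sketch runs without any model file;
    # a real script would load a serialized model from disk instead.
    global model
    model = lambda x: len(x)

def run(mini_batch):
    # Called once per mini-batch. For a file dataset, `mini_batch` is a
    # list of file paths (for a tabular dataset it is a pandas DataFrame).
    # With output_action="append_row", the returned list is appended to
    # the step's output file.
    results = []
    for item in mini_batch:
        results.append(f"{os.path.basename(item)}: {model(item)}")
    return results
```

The feature request is essentially to allow this same init/run contract to be implemented in an R file.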
**Describe alternatives you've considered**
- Azure Batch
- Azure Databricks + SparkR
- Azure Databricks + sparklyr
**Additional context**
For example, it would be great if the entry_script in ParallelRunConfig could be an R script. The R batch scoring script would then follow the same format as a Python scoring script (an init function, a run function, etc.). It would also be great if we could specify the version of R:
```python
# Proposed API sketch: pin the R version on the environment and point
# entry_script at an R file instead of a Python one.
env.r_version = '3.4.3'
parallel_run_config = ParallelRunConfig(
    environment=env,
    entry_script="batch_scoring.R",
    source_directory=".",
    output_action="append_row",
    mini_batch_size="20",
    error_threshold=1,
    compute_target=compute_target,
    process_count_per_node=2,
    node_count=1
)
```
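Until something like this exists natively, one possible workaround is a thin Python entry script that shells out to `Rscript` for each mini-batch. This is only a sketch: it assumes `Rscript` is installed in the environment's Docker image, that a hypothetical `batch_scoring.R` accepts the batch's file paths as command-line arguments, and that it prints one result line per input:

```python
import subprocess

def build_command(entry_script, mini_batch):
    # Assemble the Rscript invocation: one process per mini-batch, with
    # the batch's file paths passed as command-line arguments.
    return ["Rscript", entry_script, *mini_batch]

def init():
    # Nothing to do on the Python side; the R script owns model loading.
    pass

def run(mini_batch):
    # Wrapper run() that ParallelRunStep calls; it delegates to R.
    cmd = build_command("batch_scoring.R", mini_batch)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # Assumes the R script prints one result line per input file, which
    # append_row then collects into the step's output.
    return result.stdout.splitlines()
```

The downside is paying R interpreter startup cost per mini-batch, which native R support in ParallelRunStep would avoid.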