
Refactor to centralize Job Specification properties for reusability across ingestion jobs #14272

@chrajeshbabu

Description

Currently, some of the job specification properties are defined separately in each batch ingestion implementation. For example:

The Hadoop job runner defines the job spec constants in its own implementation:

public static final String SEGMENT_GENERATION_JOB_SPEC = "segmentGenerationJobSpec";

// Field names in job spec's executionFrameworkSpec/extraConfigs section
private static final String DEPS_JAR_DIR_FIELD = "dependencyJarDir";
private static final String STAGING_DIR_FIELD = "stagingDir";

Similarly, the Spark ingestion runner defines the same constants in its own classes:

private static final Logger LOGGER = LoggerFactory.getLogger(SparkSegmentGenerationJobRunner.class);
private static final String DEPS_JAR_DIR = "dependencyJarDir";
private static final String STAGING_DIR = "stagingDir";

It would be better to centralize these constants in a common place and reuse them, to avoid the implementations drifting apart in the future.
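One possible shape for such a shared constants holder is sketched below. The class name, package placement, and grouping are illustrative assumptions, not the actual Pinot class; the constant values themselves are taken from the snippets above.

```java
// Hypothetical shared constants class for batch ingestion job specs.
// The class name and organization are illustrative; only the string
// values come from the existing Hadoop/Spark runner implementations.
public final class BatchJobSpecConstants {

    // Top-level key for the serialized segment generation job spec.
    public static final String SEGMENT_GENERATION_JOB_SPEC = "segmentGenerationJobSpec";

    // Field names in the job spec's executionFrameworkSpec/extraConfigs section.
    public static final String DEPS_JAR_DIR = "dependencyJarDir";
    public static final String STAGING_DIR = "stagingDir";

    private BatchJobSpecConstants() {
        // Utility class; prevent instantiation.
    }
}
```

Both the Hadoop and Spark runners could then reference `BatchJobSpecConstants.DEPS_JAR_DIR` and `BatchJobSpecConstants.STAGING_DIR` instead of declaring private copies, so any future rename happens in one place.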

Metadata

Assignees

No one assigned

    Labels

    cleanup — Code cleanup or removal of dead code
    good first issue — Good for newcomers
    ingestion — Related to data ingestion pipeline

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
