
Upgrade to Spark 1.0.2 and sbt 0.13.5 #2

Merged
merged 4 commits into master on Aug 24, 2014

Conversation

tsindot
Contributor

@tsindot tsindot commented Aug 22, 2014

Upgrade to the latest release version of Spark, 1.0.2. All tests pass:

[info] Passed: Total 104, Failed 0, Errors 0, Passed 104
[success] Total time: 124 s, completed Aug 22, 2014 2:10:59 PM
radtech:spark-jobserver $ git status

Let me know if there are any issues or concerns.

-Todd
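
For reference, a minimal sketch of the build settings such an upgrade typically touches, assuming a standard sbt layout (the repo's actual build files may organize this differently, and the Scala version and "provided" scope shown here are assumptions):

  # project/build.properties -- pins the sbt launcher version
  sbt.version=0.13.5

  // build.sbt -- illustrative only; version strings taken from the PR title
  scalaVersion := "2.10.4"
  libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.2" % "provided"

Marking Spark as "provided" is a common choice for jobserver-style deployments where the Spark jars come from the cluster; whether this repo does so is not shown here.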

@velvia
Contributor

velvia commented Aug 24, 2014

Thanks, going to merge.

velvia added a commit that referenced this pull request Aug 24, 2014
Upgrade to Spark 1.0.2 and sbt 0.13.5
@velvia velvia merged commit f5e0a1f into spark-jobserver:master Aug 24, 2014
velvia pushed a commit that referenced this pull request Nov 15, 2014
velvia added a commit that referenced this pull request Nov 15, 2014
Upgrade to Spark 1.0.2 and sbt 0.13.5
@velvia velvia mentioned this pull request Mar 9, 2015
noorul pushed a commit to noorul/spark-jobserver that referenced this pull request Aug 31, 2015
…ul_R0.5.1-aruba-31177-DSE-4.7.3-Changes to v0.5.1-aruba

* commit '1d0bf91b5510c313fd5b36c20eafe52b62e2965f':
  ref #31177: Bump spark cassandra connector version to 1.2.3
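For context, such a bump is typically a one-line sbt dependency change; a hedged example (the groupId/artifactId coordinates are the connector's usual ones and are an assumption here, while the version string comes from the commit above):

  // illustrative sbt setting, not necessarily the exact line changed in that commit
  libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.2.3"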
f1yegor added a commit to f1yegor/spark-jobserver that referenced this pull request Sep 4, 2016
# This is a combination of squashed commits. The constituent commit messages:

cassandra support

# This is the commit message spark-jobserver#2:

fix config

# This is the commit message spark-jobserver#3:

timeuuid allows filtering, but the current version is compatible only with uuid

# This is the commit message spark-jobserver#2:

merge PR spark-jobserver#458
noorul pushed a commit that referenced this pull request Jun 10, 2019
* refactor(webapi): Add WebApi logging

No change in web api behavior

* fix(jobserver): Remove binaries only if not used

Currently, a user is allowed to delete a binary even
if that binary is being used by a job. Deletion then
leads to two failure cases:
- Once the job finishes, a message is sent to
JobStatusActor, which tries to persist the state to
the dao but fails: saveJobInfo queries the binaries
table for binId, and since the binary has been
deleted, the query fails and the status is not saved.
- During a restart scenario, the driver tries to
restart the job and needs the binary. Since the
binary has been deleted, the job cannot be restarted.

The fix is to not allow deletion of a binary while
an active job is using it.

Note: There were two options for implementing the
solution:
1- Add the logic directly to the deleteBinary dao
function
2- Add the logic to BinaryManager

I opted for #2 since I did not want to put business
logic in the dao layer and propagate exceptions/
messages up to WebApi.

The status code for this scenario is 403
and the new response has the following format:
{
  "status": "ERROR",
  "result": "Binary is in use by job(s) <job_id>"
}

Other notable changes:
- Changed the timeouts of the DELETE /binaries endpoint.
Previously the request timeout was 3 seconds (the
default) while the BinaryManager timeout was 60 seconds.
- DELETE /binaries now also handles the BinaryDeletionFailure
message and, instead of exposing the whole stack trace,
returns a meaningful message.
- The new dao function is not implemented for the C* dao
or FileDao.

Change-Id: I7183115d5d4157dea62b39f1bfb2245d03d903b7
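A rough Scala sketch of the guard described above; the type and method names (JobDao, getJobsByBinaryName, FinalStates) and the set of final states are assumptions for illustration, not the repo's actual API:

  import scala.concurrent.{ExecutionContext, Future}

  // Minimal stand-ins for the persistence layer; real jobserver types differ.
  case class JobInfo(jobId: String, state: String)

  trait JobDao {
    def getJobsByBinaryName(appName: String): Future[Seq[JobInfo]]
    def deleteBinary(appName: String): Future[Unit]
  }

  object JobStatus { val FinalStates = Set("FINISHED", "ERROR", "KILLED") } // assumed state names

  class BinaryManager(dao: JobDao)(implicit ec: ExecutionContext) {
    // Refuse deletion while any non-final job still uses the binary;
    // WebApi would map the Left to the 403 response shown above.
    def deleteBinary(appName: String): Future[Either[String, Unit]] =
      dao.getJobsByBinaryName(appName).flatMap { jobs =>
        val active = jobs.filterNot(j => JobStatus.FinalStates.contains(j.state))
        if (active.nonEmpty)
          Future.successful(Left(s"Binary is in use by job(s) ${active.map(_.jobId).mkString(", ")}"))
        else
          dao.deleteBinary(appName).map(_ => Right(()))
      }
  }

Putting the check in BinaryManager rather than the dao matches the rationale above: the dao stays a plain persistence layer, and the error surfaces to WebApi as a message rather than an exception.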
noorul pushed a commit that referenced this pull request Aug 5, 2019
* refactor(jobserver): remove code duplication

Change-Id: I7a32e2413fe2d887117ff79eb6135a108efa0580

* fix(jobserver): Only respond if cleanup is complete

Recently, the binary-in-use feature was introduced in
jobserver; it depends on the state of the job.

Also, the current flow of deleting a context is as
follows:
delete request -> stop context -> send response to
user -> clean jobs in JobStatusActor/Terminated event,
i.e. jobs are cleaned up in parallel with sending the
response back to the user.

The combination of these two facts leads to a problem
when DELETE /context and DELETE /binary are submitted
in succession: DELETE /binary returns 403 Forbidden
because the job is still in a non-final state.

This change improves the flow to
delete request -> stop context -> clean jobs in
JobStatusActor -> send response to user

Note:
1- Terminated is sent only after postStop has
completed.
2- If contexts are killed through the Spark UI, we
don't change the flow because there is no
interaction with the user through the API. The normal
flow is used, and it doesn't matter if state is
cleaned up a little later.
3- The flow for stopping adhoc contexts is a little
different and requires no change. If the context is
killed through the UI, then the flow defined in #2 is
used. If an adhoc job finishes normally, JobFinished
is already sent to JobStatusActor before stopping
the context and no API call is involved. If the
DELETE API calls are used for the context, the new
flow is followed.

Change-Id: Iad2c393ab871b5ac2313aedfc28cf35a381cabcc
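A simplified Akka-style sketch of the reordered flow; the actor and message names here are illustrative, not jobserver's actual ones. The supervisor watches the context actor, waits for Terminated (which arrives only after postStop, per note 1), asks JobStatusActor to clean up, and only then replies to the DELETE /contexts caller:

  import akka.actor.{Actor, ActorRef, PoisonPill, Terminated}

  // Hypothetical protocol messages for illustration only.
  case class StopContext(contextActor: ActorRef)
  case object CleanupJobs
  case object CleanupDone
  case object ContextStopped

  class ContextSupervisor(jobStatusActor: ActorRef) extends Actor {
    def receive: Receive = idle

    def idle: Receive = {
      case StopContext(ctx) =>
        context.watch(ctx)        // Terminated is delivered only after ctx's postStop has run
        ctx ! PoisonPill
        context.become(stopping(ctx, sender()))
    }

    def stopping(ctx: ActorRef, requester: ActorRef): Receive = {
      case Terminated(`ctx`) =>
        jobStatusActor ! CleanupJobs   // clean up job state first...
      case CleanupDone =>
        requester ! ContextStopped     // ...then respond to the original request
        context.become(idle)
    }
  }

The essential change is the ordering: the reply to the caller is deferred until the cleanup acknowledgement arrives, so a follow-up DELETE /binary no longer races against job-state cleanup.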
SrivigneshM pushed a commit to SrivigneshM/spark-jobserver that referenced this pull request Oct 5, 2019
…er#1209)

SrivigneshM pushed a commit to SrivigneshM/spark-jobserver that referenced this pull request Oct 5, 2019