1 change: 0 additions & 1 deletion deploy/kubernetes/dolphinscheduler/README.md
@@ -122,7 +122,6 @@ Please refer to the [Quick Start in Kubernetes](../../../docs/docs/en/guide/inst
| common.sharedStoragePersistence.storage | string | `"20Gi"` | `PersistentVolumeClaim` size |
| common.sharedStoragePersistence.storageClassName | string | `"-"` | Shared Storage persistent volume storage class, must support the access mode: ReadWriteMany |
| conf.auto | bool | `false` | Whether to restart automatically: if `true`, all components are restarted automatically after the common configuration is updated; if `false`, you need to restart the components manually. Default is `false`. |
| conf.common."alert.rpc.port" | int | `50052` | rpc port |
| conf.common."appId.collect" | string | `"log"` | way to collect applicationId: log, aop |
| conf.common."aws.credentials.provider.type" | string | `"AWSStaticCredentialsProvider"` | |
| conf.common."aws.s3.access.key.id" | string | `"minioadmin"` | The AWS access key. if resource.storage.type=S3, and credentials.provider.type is AWSStaticCredentialsProvider. This configuration is required |
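For reference, chart keys like those in the table above are normally set through a Helm values override file. A minimal sketch (key names taken from the table; all values illustrative, and the storage class name is an assumption for your cluster):

```yaml
# values-override.yaml -- illustrative overrides for keys shown above
conf:
  # restart all components automatically when the common configuration changes
  auto: true
  common:
    # way to collect applicationId: log or aop
    appId.collect: "log"
common:
  sharedStoragePersistence:
    storage: "20Gi"
    # assumed class name; must support the ReadWriteMany access mode
    storageClassName: "nfs-client"
```

Applied with something like `helm upgrade --install dolphinscheduler ./dolphinscheduler -f values-override.yaml`.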
3 changes: 0 additions & 3 deletions deploy/kubernetes/dolphinscheduler/values.yaml
@@ -345,9 +345,6 @@ conf:
# -- development state
development.state: false

# -- rpc port
alert.rpc.port: 50052

# -- set path of conda.sh
conda.path: /opt/anaconda3/etc/profile.d/conda.sh

8 changes: 8 additions & 0 deletions docs/configs/docsdev.js
@@ -258,6 +258,10 @@ export default {
title: 'File Parameter',
link: '/en-us/docs/dev/user_doc/guide/parameter/file-parameter.html',
},
{
title: 'StartUp Parameter',
link: '/en-us/docs/dev/user_doc/guide/parameter/startup-parameter.html',
},
],
},
{
@@ -969,6 +973,10 @@ export default {
title: '文件参数传递',
link: '/zh-cn/docs/dev/user_doc/guide/parameter/file-parameter.html',
},
{
title: '启动参数',
link: '/zh-cn/docs/dev/user_doc/guide/parameter/startup-parameter.html',
},
],
},
{
1 change: 0 additions & 1 deletion docs/docs/en/architecture/configuration.md
@@ -226,7 +226,6 @@ The default configuration is as follows:
| datasource.encryption.salt | !@#$%^&* | the salt of the datasource encryption |
| support.hive.oneSession | false | specify whether hive SQL is executed in the same session |
| sudo.enable | true | whether to enable sudo |
| alert.rpc.port | 50052 | the RPC port of Alert Server |
| zeppelin.rest.url | http://localhost:8080 | the RESTful API url of zeppelin |
| appId.collect | log | way to collect applicationId. To use aop, change the configuration from log to aop and uncomment the applicationId auto-collection configuration in `bin/env/dolphinscheduler_env.sh`. Note: the aop way does not support submitting yarn jobs on a remote host in client mode (e.g. via Beeline), and it fails if the applicationId collection-related environment configuration in dolphinscheduler_env.sh is overridden. |

2 changes: 1 addition & 1 deletion docs/docs/en/architecture/design.md
@@ -197,7 +197,7 @@ In the early schedule design, if there is no priority design and use the fair sc
- For details, please refer to the logback configuration of Master and Worker, as shown in the following example:

```xml
<conversionRule conversionWord="message" converterClass="org.apache.dolphinscheduler.common.log.SensitiveDataConverter"/>
<conversionRule conversionWord="message" converterClass="org.apache.dolphinscheduler.plugin.task.api.log.SensitiveDataConverter"/>
<appender name="TASKLOGFILE" class="ch.qos.logback.classic.sift.SiftingAppender">
<filter class="org.apache.dolphinscheduler.plugin.task.api.log.TaskLogFilter"/>
<Discriminator class="org.apache.dolphinscheduler.plugin.task.api.log.TaskLogDiscriminator">
2 changes: 1 addition & 1 deletion docs/docs/en/guide/installation/pseudo-cluster.md
@@ -2,7 +2,7 @@

The purpose of the pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine. In this mode, DolphinScheduler's master, worker, and API server all run on the same machine.

If you are new to DolphinScheduler and want to try out its features, we recommend following the [Standalone deployment](standalone.md). If you want more complete features and to schedule a large number of tasks, we recommend following the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend following the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
If you are new to DolphinScheduler and want to try out its features, we recommend following the [Standalone deployment](standalone.md). If you want more complete features and to schedule a large number of tasks, we recommend following the pseudo-cluster deployment. If you want to deploy DolphinScheduler in production, we recommend following the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).

## Preparation

4 changes: 3 additions & 1 deletion docs/docs/en/guide/security/security.md
@@ -19,7 +19,9 @@ Administrator login, default username/password: admin/dolphinscheduler123
- Tenant Code: **The tenant code is the user on Linux; it is unique and cannot be repeated**
- The administrator enters the `Security Center->Tenant Management` page, and clicks the `Create Tenant` button to create a tenant.

> Note: Currently, only admin users can modify tenant.
> Note:
> 1. Currently, only admin users can modify tenants.
> 2. If you create a tenant manually on Linux, you need to add the manually created tenant to the group of the DolphinScheduler bootstrap user, so that the tenant has sufficient working-directory permissions.
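A sketch of that manual setup, assuming the service is started by a user named `dolphinscheduler` whose group is also `dolphinscheduler` (both names are assumptions; adjust to your installation):

```shell
# create the tenant user manually
sudo useradd tenant_demo

# add the manually created tenant to the bootstrap user's group so it
# has sufficient permissions on the working directories
sudo usermod -aG dolphinscheduler tenant_demo

# verify the supplementary group membership
id tenant_demo
```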

![create-tenant](../../../../img/new_ui/dev/security/create-tenant.png)

1 change: 0 additions & 1 deletion docs/docs/zh/architecture/configuration.md
@@ -226,7 +226,6 @@ The common.properties file currently mainly configures hadoop/s3/yarn/applicationId
| datasource.encryption.salt | !@#$%^&* | the salt used for datasource encryption |
| support.hive.oneSession | false | whether hive SQL is executed in the same session |
| sudo.enable | true | whether to enable sudo |
| alert.rpc.port | 50052 | the RPC port of the Alert Server |
| zeppelin.rest.url | http://localhost:8080 | the RESTful API url of zeppelin |
| appId.collect | log | way to collect applicationId. To use aop, change the configuration from log to aop and uncomment the applicationId auto-collection environment variable configuration in `bin/env/dolphinscheduler_env.sh`. Note: aop does not support submitting yarn jobs from a remote host, e.g. via the Beeline client, and it fails if the user environment overrides the applicationId collection-related environment variables in dolphinscheduler_env.sh |

2 changes: 1 addition & 1 deletion docs/docs/zh/architecture/design.md
@@ -195,7 +195,7 @@
- For details, refer to the logback configuration of Master and Worker, as shown in the following example:

```xml
<conversionRule conversionWord="message" converterClass="org.apache.dolphinscheduler.common.log.SensitiveDataConverter"/>
<conversionRule conversionWord="message" converterClass="org.apache.dolphinscheduler.plugin.task.api.log.SensitiveDataConverter"/>
<appender name="TASKLOGFILE" class="ch.qos.logback.classic.sift.SiftingAppender">
<filter class="org.apache.dolphinscheduler.plugin.task.api.log.TaskLogFilter"/>
<Discriminator class="org.apache.dolphinscheduler.plugin.task.api.log.TaskLogDiscriminator">
2 changes: 1 addition & 1 deletion docs/docs/zh/guide/installation/pseudo-cluster.md
@@ -2,7 +2,7 @@

The purpose of the pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine; in this mode, the master, worker, and api server all run on the same machine.

If you are new to DolphinScheduler and want to try out its features, we recommend the [Standalone deployment](standalone.md). If you want more complete features or a larger task volume, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). For production use, we recommend the [cluster deployment](cluster.md) or [kubernetes](kubernetes.md).
If you are new to DolphinScheduler and want to try out its features, we recommend the [Standalone deployment](standalone.md). If you want more complete features or a larger task volume, we recommend the pseudo-cluster deployment. For production use, we recommend the [cluster deployment](cluster.md) or [kubernetes](kubernetes.md).

## 前置准备工作

4 changes: 3 additions & 1 deletion docs/docs/zh/guide/security/security.md
@@ -18,7 +18,9 @@
- Tenant Code: **The tenant code is the user on Linux; it is unique and cannot be repeated**
- The administrator enters the Security Center -> Tenant Management page and clicks the "Create Tenant" button to create a tenant.

> Note: Currently, only admin users can modify tenants.
> Note:
> 1. Currently, only admin users can modify tenants;
> 2. If you create a tenant manually on Linux, you need to add it to the group of the user that starts DolphinScheduler, so that the tenant has sufficient working-directory permissions.

![create-tenant](../../../../img/new_ui/dev/security/create-tenant.png)

@@ -75,6 +75,7 @@ public AlertServerHeartBeat getHeartBeat() {
.cpuUsage(systemMetrics.getSystemCpuUsagePercentage())
.memoryUsage(systemMetrics.getSystemMemoryUsedPercentage())
.jvmMemoryUsage(systemMetrics.getJvmMemoryUsedPercentage())
.diskUsage(systemMetrics.getDiskUsedPercentage())
.serverStatus(ServerStatus.NORMAL)
.isActive(alertHAServer.isActive())
.host(NetUtils.getHost())
@@ -84,8 +84,6 @@ datasource.encryption.enable=false
# datasource encryption salt
datasource.encryption.salt=!@#$%^&*

# Network IP gets priority, default inner outer

# Whether hive SQL is executed in the same session
support.hive.oneSession=false

@@ -98,15 +96,9 @@ sudo.enable=true
# network IP gets priority, default: inner outer
#dolphin.scheduler.network.priority.strategy=default

# system env path
#dolphinscheduler.env.path=dolphinscheduler_env.sh

# development state
development.state=false

# rpc port
alert.rpc.port=50052

# set path of conda.sh
conda.path=/opt/anaconda3/etc/profile.d/conda.sh

@@ -65,7 +65,7 @@ public class MonitorController extends BaseController {
@ResponseStatus(HttpStatus.OK)
@ApiException(LIST_MASTERS_ERROR)
public Result<List<Server>> listServer(@PathVariable("nodeType") RegistryNodeType nodeType) {
List<Server> servers = monitorService.listServer(nodeType);
final List<Server> servers = monitorService.listServer(nodeType);
return Result.success(servers);
}

@@ -96,8 +96,10 @@ public boolean preHandle(HttpServletRequest request, HttpServletResponse respons
}

@Override
public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler,
ModelAndView modelAndView) throws Exception {
public void postHandle(HttpServletRequest request,
HttpServletResponse response,
Object handler,
ModelAndView modelAndView) {
ThreadLocalContext.getTimezoneThreadLocal().remove();

int code = response.getStatus();
@@ -33,9 +33,11 @@
import org.apache.dolphinscheduler.api.service.UsersService;
import org.apache.dolphinscheduler.api.service.WorkflowDefinitionService;
import org.apache.dolphinscheduler.common.constants.Constants;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.ComplementDependentMode;
import org.apache.dolphinscheduler.common.enums.ExecutionOrder;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.common.enums.RunMode;
@@ -370,11 +372,9 @@ private void createOrUpdateSchedule(User user,
public void execWorkflowInstance(String userName,
String projectName,
String workflowName,
String cronTime,
String workerGroup,
String warningType,
Integer warningGroupId,
Integer timeout) {
Integer warningGroupId) {
User user = usersService.queryUser(userName);
Project project = projectMapper.queryByName(projectName);
WorkflowDefinition workflowDefinition =
@@ -389,6 +389,10 @@ public void execWorkflowInstance(String userName,
.workerGroup(workerGroup)
.warningType(WarningType.of(warningType))
.warningGroupId(warningGroupId)
.execType(CommandType.START_PROCESS)
.taskDependType(TaskDependType.TASK_POST)
.dryRun(Flag.NO)
.testFlag(Flag.NO)
.build();
executorService.triggerWorkflowDefinition(workflowTriggerRequest);
}
@@ -119,10 +119,14 @@ public User getAuthUser(HttpServletRequest request) {
sessionId = cookie.getValue();
}
}
Session session = sessionService.getSession(sessionId);
final Session session = sessionService.getSession(sessionId);
if (session == null) {
return null;
}
if (sessionService.isSessionExpire(session)) {
sessionService.expireSession(session.getUserId());
return null;
}
// get user object from session
return userService.queryUser(session.getUserId());
}
@@ -20,10 +20,12 @@
import org.apache.dolphinscheduler.api.security.impl.AbstractAuthenticator;
import org.apache.dolphinscheduler.dao.entity.User;

import lombok.NonNull;

public class PasswordAuthenticator extends AbstractAuthenticator {

@Override
public User login(String userName, String password) {
public User login(@NonNull String userName, String password) {
return userService.queryUser(userName, password);
}
}
@@ -706,6 +706,7 @@ private void doOnlineScheduler(Schedule schedule) {
}

schedule.setReleaseState(ReleaseState.ONLINE);
schedule.setUpdateTime(new Date());
scheduleMapper.updateById(schedule);

Project project = projectMapper.queryByCode(workflowDefinition.getProjectCode());
@@ -735,6 +736,7 @@ private void doOfflineScheduler(Schedule schedule) {
log.debug("The schedule is already offline, scheduleId:{}.", schedule.getId());
return;
}
schedule.setUpdateTime(new Date());
schedule.setReleaseState(ReleaseState.OFFLINE);
scheduleMapper.updateById(schedule);
WorkflowDefinition workflowDefinition =
@@ -94,7 +94,7 @@ public void expireSession(Integer userId) {

@Override
public boolean isSessionExpire(Session session) {
return System.currentTimeMillis() - session.getLastLoginTime().getTime() <= Constants.SESSION_TIME_OUT * 1000;
return System.currentTimeMillis() - session.getLastLoginTime().getTime() >= Constants.SESSION_TIME_OUT * 1000;
}

}
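The one-character change above flips the comparison so that `isSessionExpire` returns true only once the idle time reaches the timeout; the old `<=` marked fresh sessions as expired. A standalone sketch of the corrected predicate (the 7200-second timeout is an assumed value standing in for `Constants.SESSION_TIME_OUT`):

```java
public class SessionExpiryCheck {

    // assumed timeout in seconds; the real value lives in Constants.SESSION_TIME_OUT
    static final long SESSION_TIME_OUT = 7200;

    // expired when the time elapsed since last login reaches the timeout
    // (the buggy version used <=, which reported valid sessions as expired)
    static boolean isSessionExpire(long lastLoginTimeMillis, long nowMillis) {
        return nowMillis - lastLoginTimeMillis >= SESSION_TIME_OUT * 1000;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // a session refreshed one minute ago is still valid
        System.out.println(isSessionExpire(now - 60_000L, now));
        // a session idle for three hours has expired
        System.out.println(isSessionExpire(now - 3L * 3_600_000L, now));
    }
}
```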