
The DIRAC release v6r5 brings several new features that are very useful, although still somewhat experimental. The release is fully compatible with the old functionality, so upgrading to v6r5 carries no risk of running new, unstable code: it is not used by default and must be explicitly activated by installing the new components and adding the corresponding CS settings.

The new features of v6r5 are briefly described below. The release notes with the full list of changes can be found in the usual place.

Executors

The new Executors framework is introduced in v6r5. An Executor is a new type of component in the DIRAC framework, alongside Services and Agents. Executors can be seen as lightweight services that receive small tasks for immediate execution through a message-passing mechanism from a central dispatcher called ExecutorMind ( or classes deriving from it ). Their primary purpose in the DIRAC Workload Management System is to replace the Optimizers mechanism for inspecting jobs prior to their insertion into the Task Queue. Executors speed up this process thanks to an efficient method of passing the job data from one optimizer to the next without storing it in the database in between. In addition, several executor chains can be launched in parallel as necessary to cope with the WMS load.

In certification tests with a single-host DIRAC installation, the use of executors allowed running up to 1M jobs per day ( more than 500K jobs per day in a sustained way ). This is an order of magnitude higher than our previous limits.
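
To illustrate the plugin model, the sketch below shows what a minimal custom executor might look like. The module path, base class and optimizeJob hook are assumptions modelled on the standard v6r5 executors, and the job handling is deliberately trivial; check the standard executors in the DIRAC source for the exact interface before writing a real one::

# Hypothetical "JobAudit" executor sketch; the module path, base class and
# optimizeJob hook are assumptions modelled on the standard v6r5 executors.
from DIRAC import S_OK, gLogger
from DIRAC.WorkloadManagementSystem.Executor.Base.OptimizerExecutor import OptimizerExecutor

class JobAudit( OptimizerExecutor ):
  """ Inspect each job handed over by the OptimizationMind and pass it on """

  @classmethod
  def initializeOptimizer( cls ):
    # One-time initialization when the executor process starts
    return S_OK()

  def optimizeJob( self, jid, jobState ):
    # The job data arrive with the dispatcher message, so no intermediate
    # database reads are needed here
    manifest = jobState.getManifest()
    # Depending on the DIRAC version this may instead return an S_OK/S_ERROR
    # structure that has to be unwrapped before use
    gLogger.info( "Inspecting job %s" % jid )
    return S_OK()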

Setting up Executors

The DIRAC distribution comes with four standard Executors: InputData, JobSanity, JobPath and JobScheduling, which correspond to the old Optimizers and have the same function. More standard and custom executors will be added later. These executors are plugins into the framework and should be set up as follows.

First, the OptimizationMind service should be set up and started in the usual way. Second, the Executors section must be added to the CS. An example section is shown below::

Systems
{
  WorkloadManagement
  {
    Integration
    {
      Services
      {
        OptimizationMind
        {
          Port = 9175
        }
      }
      Agents
      { ... }
      Executors
      {
        Optimizer
        {
          Load = JobPath, JobSanity, InputData, JobScheduling
        }
        JobPath
        {
          Option = Value
        }
        ...
      }
    }
  }
}

As the example shows, a container of Executor plugins is defined similarly to the old Optimizer definition. Each Executor plugin can also have its own options.

Executors are installed in a similar way to other components, using the dirac-install-executor command-line tool or the SystemAdministrator console. For example::

volhcb13.cern.ch> install executor WorkloadManagement Optimizer -p Load=JobPath,JobSanity,InputData,JobScheduling

Note that Executors cannot run together with the old-style Optimizers. If Executors are put into production, the old Optimizers must be shut down. Conversely, if it is necessary to resurrect the old Optimizers, the Executors must go down.

Migrating the JobAgent VOPlugin

The VOPlugin option of the JobAgent optimizer defines a module that can modify the optimization chain. In the old version, the VOPlugin had to contain a class that was instantiated with a dict containing JobID, ClassAd and ConfigPath as its argument. The new version receives a JobState instead of a ClassAd. This new object contains all the required information about the job and its manifest; the manifest is the equivalent of the ClassAd. To get it, call JobState.getManifest(). This method returns an object of type DIRAC.WorkloadManagement.Client.JobState.JobManifest, which provides functions equivalent to those of ClassAd and can also return the manifest in CFG format for easier handling. For instance:

classadJob.get_expression( 'AncestorDepth' ).replace( '"', '' ).replace( 'Unknown', '' )

would be:

manifest = jobState.getManifest()
manifest.getOption( 'AncestorDepth', '' ).replace( 'Unknown', '' )

The expected return value of the VOPlugin is the same as in the old version.
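
As a hedged illustration of the migration, a v6r5-style VOPlugin might look like the sketch below. The class name, the entry method and the keys of the argument dict are assumptions based on the description above; only the JobState/JobManifest calls reproduce the documented usage::

# Hypothetical migrated VOPlugin sketch; class name, method name and dict
# keys are assumptions, not the definitive interface.
from DIRAC import S_OK

class MyVOPlugin( object ):

  def __init__( self, argsDict ):
    # The plugin now receives a JobState object instead of a ClassAd
    self.jobID = argsDict[ 'JobID' ]            # assumed key
    self.jobState = argsDict[ 'JobState' ]      # assumed key, replaces 'ClassAd'
    self.configPath = argsDict[ 'ConfigPath' ]  # assumed key

  def execute( self ):
    # The manifest plays the role of the old ClassAd
    manifest = self.jobState.getManifest()
    ancestorDepth = manifest.getOption( 'AncestorDepth', '' ).replace( 'Unknown', '' )
    # ... adjust the optimization chain as the old plugin did ...
    return S_OK()  # the return contract is unchanged with respect to the old version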

BOINC Computing Element

A new type of resource is made available on an experimental basis. Desktop Grids (DG) based on the BOINC technology can now be constructed within the DIRAC framework. To set up a DG, one needs to install a standard BOINC server and to define a Site with Computing Elements of the BOINC type pointing to this server. On the client side, one needs to install the Oracle VirtualBox hypervisor and a standard BOINC client. The BOINC client must be configured to work with the project defined in the DIRAC/BOINC server.
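
For orientation only, the corresponding Resources section might look roughly like the sketch below; the site and CE names are placeholders and the exact options recognized by the BOINC Computing Element type are not described here::

Resources
{
  Sites
  {
    BOINC
    {
      BOINC.Example.org
      {
        CEs
        {
          boinc-server.example.org
          {
            CEType = BOINC
            ...
          }
        }
      }
    }
  }
}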

The principle of DG operation is that pilots are submitted to the BOINC server, which in turn passes them to the clients together with a special application that performs the following steps:

- downloads the required virtual machine image ( usually CERNVM with CVMFS support )
- starts the virtual machine by passing the image to VirtualBox
- instructs the virtual machine to execute the standard DIRAC pilot

The rest is done in the same way as with any other DIRAC resource.

More detailed documentation is in preparation. The pilot BOINC service is installed as part of the France-Grilles installation, where more experience with its use will be gained before offering it to other DIRAC users.
