
ATLAS Data Processing @ IFIC




I.- Using the Computing and Grid Resources at IFIC

1. New procedure for requesting access and Grid certificates

Since March 2017 the procedure has changed, because the previous Spanish Certification Authority is no longer active. Certificates for IFIC and other Spanish e-infrastructures are now available through the GÉANT Trusted Certificate Service (TCS), currently provided by DigiCert.

Please follow the new procedure detailed here: https://twiki.ific.uv.es/twiki/bin/view/ECiencia/GridAccessProcedure

2. Installing your Personal Certificate in your computer

Once you have obtained your Personal Certificate and want to use it with Globus to access the Grid computing resources, you have to install it on your computer in the directory ~/.globus. Your certificate consists of two parts: a public key and a private key. It is very important to save the private key with adequate permissions so that other people cannot access it. Remember that, for extra security, it is encrypted with the AFS password you had when you applied for it. To install your certificate on your computer, follow these instructions :

1- Back up your certificate from your browser to a temporary directory, let us say ~myusername/temp/, as follows (this example uses Mozilla Firefox) :

Select in your browser Edit -> Preferences -> Advanced -> View Certificates
Select your new certificate and click on Backup
Save your certificate with a name you choose (for example "MyCertificate") into a directory of your choice, for example ~myusername/temp/. You will be asked for the password of your certificate.

2- Once you have your certificate "MyCertificate.p12" in p12 format in your ~myusername/temp/ directory, log into a User Interface machine (you can do it on your PC as well if you have AFS) and execute the following script from the ~myusername/temp/ directory, then follow the instructions (note: type MyCertificate without the .p12 extension) :

myhost:~/temp> ~sanchezj/public/p12toglobus.sh MyCertificate

This will OVERWRITE the files existing in your ~/.globus directory

3- Make sure everything went fine, then back up your MyCertificate.p12 file in a safe place and delete it from the ~myusername/temp/ directory.
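If the helper script is not available, the conversion it performs can also be done by hand with openssl. The following is only a minimal sketch (the exact options used by p12toglobus.sh may differ), to be run from the directory containing MyCertificate.p12 :

$> mkdir -p ~/.globus
$> openssl pkcs12 -in MyCertificate.p12 -clcerts -nokeys -out ~/.globus/usercert.pem   ### extract the public certificate
$> openssl pkcs12 -in MyCertificate.p12 -nocerts -out ~/.globus/userkey.pem            ### extract the (encrypted) private key
$> chmod 444 ~/.globus/usercert.pem
$> chmod 400 ~/.globus/userkey.pem                                                     ### the private key must be readable only by you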

If you need more help, please visit the PkIRISGrid page.


3. Renewing your Personal Certificate

4. Joining the Atlas Virtual Organization

Now you need to join the ATLAS Virtual Organisation (VO). Being a member of a VO is like having Visas in your passport that say what you can do (known as Authorisation). To join the LCG you need to follow the ATLAS VO Registration procedure. When asked, select only the following groups and NO ROLES (unless you really know what you're asking for):

  • /atlas
  • /atlas/es
  • /atlas/lcg1

These groups are enough for full data access and job handling within the ATLAS VO as a normal user.

IMPORTANT NOTE: Since May 31st 2010, user datasets are required to begin with "user.nickname", where nickname is the nickname attribute in the ATLAS VO extension of your grid certificate. Please verify that you have a nickname set for your certificate.
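A quick way to check the nickname from a User Interface with the grid environment set up (a sketch; the exact formatting of the attribute line may vary between VOMS client versions) is:

$> voms-proxy-init -voms atlas
$> voms-proxy-info -all | grep -i nickname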


5. Re-signing your Atlas-VO membership

When your ATLAS VO membership is about to expire, you will receive an automatic notification e-mail from the ATLAS VOMRS asking you to re-sign the Grid and VO AUPs in order to renew your VO membership. To do so, you will be redirected to the Atlas VO Registration page of the VOMRS Atlas Service. Read and sign.


6. Register through the ATLAS VOMS admin service

CERN requires everyone who is part of ATLAS to be registered with CERN HR. If you aren't already, you need to do so. To register with CERN, please follow the steps explained here.

Access to ATLAS grid resources is granted by joining the ATLAS VO. This is a three-step process that is accomplished entirely via the web.

  • Be sure to use the same browser into which you imported your certificate, since the registration page will use your certificate to authenticate you.
  • Read the LCG Usage Rules.
  • Go to the ATLAS VOMS server. You may need to add an exception for the site in your browser's security settings, or teach your browser to trust the CERN authority, in order to trust the site's certificate. Note that your personal certificate DN and Certificate Authority are shown in red at the lower left of the window. If they are not, you need to load your certificate into your browser before proceeding.
  • Fill out the requested information, in particular the address and phone number at your institution and the email address known as the primary address of your CERN account.
  • Read and sign the VO AUP.
  • Click submit; you should receive an email with a confirmation link to confirm the request.



II.- The Atlas Distributed Data Management (2011)

To run Atlas Data Analysis we first have to identify the dataset that we want to use. In order to search for datasets on the grid storage resources, we can use one of the web tools provided by the Atlas collaboration for this purpose. We can also use the DQ2 Clients directly.

First of all, it is recommended to browse the web page provided by IFIC to check whether the datasets we are looking for are already available at the Spanish Tier-2 Federation storages. We can also access the Storage Element at IFIC directly by using the ls and/or cp linux commands to list or copy data located in the directories under the following spacetokens :

/lustre/ific.uv.es/grid/atlas/atlasdatadisk
/lustre/ific.uv.es/grid/atlas/atlasproddisk
/lustre/ific.uv.es/grid/atlas/atlasgroupdisk
/lustre/ific.uv.es/grid/atlas/atlasuserdisk
/lustre/ific.uv.es/grid/atlas/atlasmcdisk

for example by executing :

$> ls /lustre/ific.uv.es/grid/atlas/atlasproddisk/mc08/AOD/

If the datasets we are looking for are neither at the IFIC Storage Element nor at the Spanish Tier-2 Federation storages, we can do one of the following :


II.1.- Searching Datasets on the Grid using the Web Tools

AMI is the ATLAS Metadata Interface portal page, which we can use to search for datasets. A Tutorial is provided to help users navigate it, as well as a Rapid Introduction to the AMI Dataset Search Interface in the form of a FAQ.

The Panda Monitor web site also provides a way of searching for datasets. The dataset browser (very slow!) allows browsing of DQ2 datasets based on dataset metadata and site selections. Dataset searches can be done with the search form (with wildcards) or the quick search (with the full name, no wildcards). The Panda Database Query Form allows a quick search for datasets.


II.2.- Searching Datasets on the Grid using the DQ2 Clients (2011)

All official Atlas Datasets stored on the Grid storage resources are registered in Local Catalogs located at each Tier-1 site, and also in the Central Catalog at CERN. Searching for datasets can therefore be done by querying a Catalog. For the Spanish cloud the Local Catalog is located at PIC-Barcelona. To interact with the Catalogs we can use the DQ2 Client tools provided by the Atlas Distributed Data Management Group.

The DQ2 Clients consist of a group of DQ2 Enduser tools and DQ2 Commandline utilities that all users can use to query the catalogs. While the DQ2 Enduser tools can be used for general and common queries, the DQ2 Commandline utilities are intended for advanced use, and some of them require a Production Role privilege to be granted to the user, for example to delete replicas of datasets. Note also that some queries (listing datasets, for example) can be executed either with the dq2-ls enduser tool or with the dq2-list-dataset commandline utility.

In order to use these DQ2 Clients at IFIC, we first have to set up the DQ2 environment variables by doing the following :

Log into a User Interface machine :

$> ssh ui00.ific.uv.es   ( or use the short command  $> ssh ui00  if you are logging in from IFIC )
Execute the following setup script:
$> source $VO_ATLAS_SW_DIR/ddm/latest/setup.sh
And create a valid voms-proxy as follows; do not use the grid-proxy-init command :
$> voms-proxy-init -voms atlas

Now we are ready to use the DQ2 Clients :

The details on how to use these DQ2 Clients are described at the DQ2 Clients HowTo CERN twiki page and the Atlas Distributed Data Management web page.


II.2.a.- The DQ2 Endusers tools :

The most used dq2 commands, known as DQ2 Enduser tools, are described below. Execute a selected command with the -h option to see how to use it; for example $> dq2-ls -h. See also the DQ2 Enduser tools at the Atlas DDM web page.

$> dq2-ls DATASETNAME                                 // find a dataset named DATASETNAME
$> dq2-ls DATASETNAME*                                // use a wildcard to find datasets whose name contains the string DATASETNAME
$> dq2-ls -f DATASETNAME                              // list the files in the dataset named DATASETNAME
$> dq2-ls -fp DATASETNAME                             // list the physical filenames in the dataset named DATASETNAME
$> dq2-ls -r DATASETNAME                              // list the replica locations of the dataset named DATASETNAME

$> dq2-get DATASETNAME                                // download the full dataset named DATASETNAME
$> dq2-get -f FILENAME DATASETNAME                    // download a single file named FILENAME from the dataset
$> dq2-get -f FILENAME1,FILENAME2,... DATASETNAME     // download several named files from the dataset
$> dq2-get -n NUMBEROFFILES DATASETNAME               // download NUMBEROFFILES random files from the dataset
$> dq2-get -s SITE DATASETNAME                        // download the dataset from the site named SITE

$> dq2-put -s SOURCEDIRECTORY DATASETNAME             // create a dataset named DATASETNAME from files in SOURCEDIRECTORY on your local disk
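For illustration, a typical search-and-download sequence looks like this (the dataset name is the one used later in the Ganga example of section VI.2; any other dataset works the same way):

$> dq2-ls mc09_7TeV.105200*AOD*                                                           // find matching datasets
$> dq2-ls -r mc09_7TeV.105200.T1_McAtNlo_Jimmy.merge.AOD.e510_s765_s767_r1302_r1306/      // list its replica locations
$> dq2-get -n 1 mc09_7TeV.105200.T1_McAtNlo_Jimmy.merge.AOD.e510_s765_s767_r1302_r1306/   // download one random file for a quick local test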


II.2.b.- The DQ2 Commandline utilities :

The list of the DQ2 Commandline utilities is presented below. Their names are self-explanatory. For more details about their usage, execute a selected command with the -h option; for example $> dq2-list-dataset -h :

command description
dq2-check-replica-consistency Refresh completeness information of a dataset replica
dq2-close-dataset Close a dataset
dq2-delete-datasets Delete a dataset
dq2-delete-files Delete files from a dataset
dq2-delete-replicas Delete replicas of a dataset
dq2-delete-subscription Delete a subscription to a site of a dataset
dq2-delete-subscription-container
dq2-destinations Lists the possible destination sites for subscriptions
dq2-erase Erases a dataset
dq2-freeze-dataset Freezes a dataset
dq2-get-metadata Retrieves metadata information for the dataset
dq2-get-number-files Gets the number of files in a dataset
dq2-get-replica-metadata Get metadata of a dataset replica
dq2-list-dataset List datasets
dq2-list-dataset-by-creationdate List datasets according to their creation date
dq2-list-dataset-replicas List replicas of a dataset
dq2-list-dataset-replicas-container
dq2-list-datasets-container List datasets in a container
dq2-list-dataset-site List datasets in a site
dq2-list-erased-datasets List all erased datasets
dq2-list-file-replicas List all file replicas
dq2-list-files List all files in a dataset
dq2-list-subscription List all subscriptions
dq2-list-subscription-info List subscription information for a dataset
dq2-list-subscription-site List all subscriptions for a given site
dq2-metadata List all possible metadata values
dq2-ping Checks availability of the DQ2 central services
dq2-register-container Registers a new container
dq2-register-dataset Registers a new dataset
dq2-register-datasets-container Registers new datasets in a container
dq2-register-files Register files into a dataset
dq2-register-location Register a location for a dataset
dq2-register-subscription Register a subscription for a dataset to a site
dq2-register-subscription-container
dq2-register-version Register a new version for a dataset
dq2-reset-subscription Reset all subscription for a dataset
dq2-reset-subscription-site Reset a subscription for a dataset for a site
dq2-sample Register a new dataset out of a partial copy of an existing dataset
dq2-set-metadata  
dq2-sources List all possible site sources

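As a quick illustration of the read-only utilities (DATASETNAME is a placeholder for a real dataset name):

$> dq2-ping                                 // check that the DQ2 central services are reachable
$> dq2-list-dataset-replicas DATASETNAME    // where are the replicas of the dataset?
$> dq2-list-files DATASETNAME               // list the files it contains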


II.3.- Searching Datasets on the Grid using the RUCIO Clients (2015)

All official Atlas Datasets stored on the Grid storage resources are registered in Local Catalogs located at each Tier-1 site, and also in the Central Catalog at CERN. Searching for datasets can therefore be done by querying a Catalog. For the Spanish cloud the Local Catalog is located at PIC-Barcelona. To interact with the Catalogs we can use the RUCIO Client tools provided by the Atlas Distributed Data Management Group.

The RUCIO Clients consist of a set of end-user tools and command-line utilities that all users can use to query the catalogs. While the end-user tools can be used for general and common queries, the command-line utilities are intended for advanced use, and some of them require a Production Role privilege to be granted to the user, for example to delete replicas of datasets. Note also that related queries can be done with different commands, for example rucio list-dids to find datasets and rucio list-dataset-replicas to locate their replicas.

In order to use these RUCIO Clients at IFIC, we first have to set up the RUCIO environment variables by doing the following :

Log into a User Interface machine :

$> ssh -Y UserName@ui00.ific.uv.es   ( or use the short command  $> ssh ui00  if you are logging in from IFIC )

Execute the following setup script file:

$> export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
$> source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
$> export CERN_USER=YourUserName
$> export RUCIO_ACCOUNT=$CERN_USER
$> export RUCIO_AUTH_TYPE=x509_proxy
$> localSetupRucioClients
###asetup --gcc48default 17.1.4,rucio
###source /afs/cern.ch/atlas/offline/external/GRID/ddm/rucio/rucio_testing.sh

And create a valid voms-proxy as follows, do not use the grid-proxy-init command :

$> voms-proxy-init -voms atlas

Now we are ready to use the RUCIO Clients :

The details on how to use these RUCIO Clients are described on the Rucio CERN web page and the Rucio Clients HowTo twiki page.

Contact group: atlas-comp-rucio-support@cern.ch


II.3.a.- The RUCIO Endusers tools and Commandline utilities:

The list of the RUCIO Commandline utilities is presented below. Their names are self-explanatory. For more details about their usage, execute a selected command with the -h option; for example $> rucio list-files -h :

$> export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
$> source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
$> export CERN_USER=YourUserName
$> export RUCIO_ACCOUNT=$CERN_USER
$> export RUCIO_AUTH_TYPE=x509_proxy
$> localSetupRucioClients
$> voms-proxy-init -voms atlas

command description
rucio ping Ping Rucio server
rucio whoami Get information about account whose token is used
rucio list-file-replicas List file replicas
rucio list-dataset-replicas List the dataset replicas
rucio add-dataset Add dataset
rucio add-container Add container
rucio attach Attach a list of Data Identifiers (file, dataset or container) to an other Data Identifier (dataset or container)
rucio detach Detach a list of Data Identifiers (file, dataset or container) from an other Data Identifier (dataset or container)
rucio list-dids List the data identifiers matching some metadata
rucio list-parent-dids List parent data identifiers
rucio list-scopes List all available scopes
rucio close Close data identifier
rucio stat List attributes and statuses about data identifiers
rucio delete Delete data identifier
rucio list-files List data identifier contents
rucio list-content List the content of a collection
rucio upload Upload method
rucio download Download method
rucio get-metadata Get metadata for DIDs
rucio set-metadata set-metadata method
rucio delete-metadata Delete metadata
rucio list-rse-usage list-rse-usage method
rucio list-account-usage list-account-usage method
rucio list-account-limits List account limits on RSEs
rucio add-rule Add replication rule
rucio delete-rule Delete replication rule
rucio rule-info Retrieve information about a rule
rucio list-rules List replication rules
rucio update-rule Update replication rule
rucio list-rses List RSEs
rucio list-rse-attributes List the attributes of an RSE
rucio list-datasets-rse List all the datasets at a Rucio Storage Element
rucio test-server Test Server

command optional arguments
rucio -h show this help message and exit
rucio --help show this help message and exit
rucio --version show program's version number and exit
rucio --verbose Print more verbose output
rucio -v Print more verbose output
rucio -H $ADDRESS The Rucio API host
rucio --host $ADDRESS The Rucio API host
rucio --auth-host $ADDRESS The Rucio Authentication host
rucio -a $ACCOUNT rucio account to use
rucio --account $ACCOUNT rucio account to use
rucio -S $AUTH_STRATEGY Authentication strategy (userpass or x509 or ...)
rucio --auth-strategy $AUTH_STRATEGY Authentication strategy (userpass or x509 or ...)
rucio -T $TIMEOUT Set all timeout values to SECONDS
rucio --timeout $TIMEOUT Set all timeout values to SECONDS
rucio -u $USERNAME username
rucio --user $USERNAME username
rucio -pwd $PASSWORD password
rucio --password $PASSWORD password
rucio --certificate $CERTIFICATE Client certificate file
rucio --ca-certificate $CA_CERTIFICATE CA certificate to verify peer against (SSL)
The overall command-line syntax, as printed by rucio --help, is:

usage: rucio [-h] [--version] [--verbose] [-H ADDRESS] [--auth-host ADDRESS]
             [-a ACCOUNT] [-S AUTH_STRATEGY] [-T TIMEOUT] [-u USERNAME]
             [-pwd PASSWORD] [--certificate CERTIFICATE]
             [--ca-certificate CA_CERTIFICATE]
             <subcommand> ...

where <subcommand> is one of the commands listed in the table above.

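As an illustration, a minimal interactive session combining some of these commands could look like the following (the scope mc12_8TeV and the pattern are just an example, taken from the pathena job in section VII.2; DATASETNAME is a placeholder):

$> rucio whoami                                  ### check which Rucio account your proxy maps to
$> rucio list-dids mc12_8TeV:mc12_8TeV.*AOD*     ### find datasets matching a pattern in the mc12_8TeV scope
$> rucio list-files mc12_8TeV:DATASETNAME        ### list the files of a chosen dataset
$> rucio download mc12_8TeV:DATASETNAME          ### download it to the local disk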


II.4.- Requesting Subscription of Datasets on the Grid

When datasets are found at some storage element on the Grid, we can request a replica of them to be stored on an adequate local storage element. This can be done using the Subscription Request Form of the Panda Monitor. However, this may not be straightforward, as the user must first be registered and may also need certain privileges.


II.5.- Use of the Storage Space ( and spacetokens ) at IFIC

At the moment there is no storage space dedicated to Tier-3 users in the Lustre Storage Element at IFIC. The spacetokens listed below, defined by the Atlas collaboration, are available in Lustre and follow the usage policy defined by Atlas :

ATLASMCDISK
ATLASDATADISK
ATLASPRODDISK
ATLASGROUPDISK
ATLASSCRATCHDISK
ATLASLOCALGROUPDISK

Note: Only these spacetokens are usable; storage space outside them is not. Strictly avoid using commands like edg-gridftp-mkdir and/or globus-url-copy to create new directories in Lustre or to copy from/into them. See the description below on how to use these spacetokens for managing your files/data.

Since these spacetokens are managed by the Atlas Distributed Data Management (DDM) system, all data file transfers involving them have to use the lcg-xx and lfc-xx command-line tools, and the data files have to be registered in the Catalog.

Data files can be copied to ATLASSCRATCHDISK by users or jobs. Users should be aware that this is not a permanent storage space, so the files may be deleted centrally as specified by the Atlas DDM policy. Files copied there have to be registered in the Catalog, otherwise they will be deleted.

Data files can also be copied to ATLASLOCALGROUPDISK, which is intended to be dedicated to Tier-3 users. However, until the Tier-3 infrastructure is in operation and a usage policy (quotas, ...) has been defined, users have to use this storage space with care. Very large files stored in this space will be deleted.

The following example describes how to transfer files/data involving the ATLASSCRATCHDISK spacetoken and the LFC Catalog (this is also valid for the ATLASLOCALGROUPDISK spacetoken) :

Log into a User Interface and get a valid proxy :

$> ssh ui05 ( or ui06 )
$> voms-proxy-init -voms atlas

Configure the environment variables LFC_HOME and LFC_HOST as follows :

$> export LFC_HOME=/grid/atlas/users/myname
$> export LFC_HOST=lfcatlas.pic.es

The variable LFC_HOST indicates the location of the Catalog in use, and LFC_HOME is used to complete the LFN (Logical File Name) that references, in the Catalog, the file you copy to Lustre.

The following command does all of this: it copies the source file "myFile.txt" from your local directory [file:/`pwd`/myFile.txt] to the (destination) Storage Element Lustre through the SRM protocol [-d srmv2.ific.uv.es], into the spacetoken ATLASSCRATCHDISK [-s ATLASSCRATCHDISK], using the relative destination path/filename "users/myname/myCopiedFile.txt" [-P users/myname/myCopiedFile.txt], and registers it in the Catalog under the logical file name (its reference) "myFilenameInCatalog.txt" [-l lfn:myFilenameInCatalog.txt] :

$> lcg-cr --vo atlas -v -s ATLASSCRATCHDISK -P users/myname/myCopiedFile.txt -l lfn:myFilenameInCatalog.txt -d srmv2.ific.uv.es file:/`pwd`/myFile.txt

To check that the file is registered in the Catalog execute the following command :

$> lfc-ls -l

To check that the file has been copied into Lustre execute the following command :

$> ls -l /lustre/ific.uv.es/grid/atlas/atlasscratchdisk/users/myname/

To copy a registered file from Lustre into your local directory using its filename referenced in the Catalog execute the following command :

$> lcg-cp --vo atlas -v lfn:myFilenameInCatalog.txt file:/`pwd`/myFile.txt

To copy a file directly from Lustre into your local directory execute the following command :

$> cp /lustre/ific.uv.es/grid/atlas/atlasscratchdisk/users/myname/myCopiedFile.txt .

To delete a file from Lustre as well as its reference filename in the Catalog execute the following command :

$> lcg-del -a --vo atlas lfn:myFilenameInCatalog.txt

For more details on how to use the DDM commands to manipulate files and the Catalog, refer to the course/tutorial presented at the GRID y e-CIENCIA 2008 course held at IFIC.

Additionally, if you want to know the storage space used/available for a given spacetoken, execute the following command :

$> lcg-stmd -b -e httpg://srmv2.ific.uv.es:8443/srm/managerv2 -s ATLASSCRATCHDISK



III.- The Atlas Software releases installed at IFIC

If we want to know which Atlas Software Releases are installed on the Computing Elements at IFIC, we query the Grid Information System. To do this, log into a User Interface machine and execute the following command :

$> lcg-infosites --vo atlas tag > releases.tmp

Then edit the file releases.tmp and search for the IFIC Computing Elements, namely lcg2ce.ific.uv.es and ce01.ific.uv.es. You should see an output which looks like the following :

Name of the CE: ce01.ific.uv.es
   VO-atlas-production-12.0.6
   VO-atlas-production-12.0.7
   VO-atlas-production-12.0.8
   VO-atlas-production-14.1.0.1-i686-slc4-gcc34-opt
   VO-atlas-production-14.1.0.2-i686-slc4-gcc34-opt
   VO-atlas-production-14.1.0.3-i686-slc4-gcc34-opt
   VO-atlas-production-14.1.0.4-i686-slc4-gcc34-opt

Name of the CE: lcg2ce.ific.uv.es
   VO-atlas-production-12.0.6
   VO-atlas-production-12.0.7
   VO-atlas-production-12.0.8
   VO-atlas-production-14.1.0.1-i686-slc4-gcc34-opt
   VO-atlas-production-14.1.0.2-i686-slc4-gcc34-opt
   VO-atlas-production-14.1.0.3-i686-slc4-gcc34-opt
   VO-atlas-production-14.1.0.4-i686-slc4-gcc34-opt
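Since the full dump is long, you can also filter it directly for the IFIC entries, for example (a convenience one-liner; the -A context count of 20 lines is arbitrary):

$> grep -A 20 'ific.uv.es' releases.tmp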

The installed releases of the Atlas Software at IFIC can also be seen by visiting the Atlas Installation Pages site and selecting IFIC-LCG2 as Site Name. You can also select the release number of the Atlas Software whose installation you want to check; more search possibilities are offered there. Note that you need a valid certificate loaded in your browser in order to access this web site.

The software available through cvmfs can also be listed directly from a User Interface, for example:

ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase

ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/root/     ### available ROOT versions
ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/python/   ### available Python versions
ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/Ganga/    ### available Ganga versions
ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/rucio     ### available Rucio client versions



IV.- The Data Analysis tools installed at IFIC

The official tool to perform Atlas Distributed Data Analysis on the Grid is Ganga. Detailed information about Ganga can be found at its official site, as well as a user guide.

Atlas User Support is provided by the Hypernews Forum Ganga Users and Developers.



V.- Performing Atlas Data Analysis on Local Computer

V.1.- Release 16 (2011)

At IFIC, the ATLAS software is installed in AFS and can be accessed through a User Interface, so log into one of the User Interfaces :

$> ssh ui00

Note: The following example uses release 16.0.2 of the Atlas software. You can find older versions here

1. Preparing your account to use the ATLAS software :

To prepare your account, first you need to setup ATLAS software's environment and general tools.

$> source /lustre/ific.uv.es/sw/atlas/local/setup.sh

To setup Athena first do:

$> cd $HOME
$> mkdir -p AthenaTestArea/16.0.2
$> export AtlasSetup=${VO_ATLAS_SW_DIR}/software/16.0.2/AtlasSetup
$> alias asetup='source $AtlasSetup/scripts/asetup.sh'

The Athena setup is performed by executing the asetup command with a list of options and arguments.

$> asetup <arg1> <arg2>,<arg3> --<option1> --<option2> <value2> --tags=<tag1>,<tag2>,<tag3>

where:

  • <arg1>, <arg2> and <arg3> are arguments, which may be separated by spaces or commas (",").
  • <option1> is an option without value (corresponding to a particular value of a boolean variable).
  • <option2> is an option with value <value2>, which cannot be defaulted (i.e. a value must be supplied).
  • --tags (which has the aliases -t or --tag) can be used to specify a space or comma-separated list of tags (corresponding to command line arguments).

Some commonly used options have single-character aliases, in which case they are denoted by a single rather than a double dash (e.g. -t instead of --tags).

Most configuration variables can be specified as arguments, options or tags, although some that have associated values can only be specified as option/value pairs. See AtlasSetup for a full explanation.

The list of available options and arguments can be viewed by specifying either of:

$> asetup -h
$> asetup --help

A simple example of asetup usage at IFIC is the following:

$> asetup 16.0.2 --testarea=$HOME/AthenaTestArea --svnroot=svn+ssh://myCERNuserName@svn.cern.ch/reps/atlasoff --multitest --dbrelease "<latest>"

Notes:

  1. --testarea Sets the location of the test development area.
  2. --svnroot By default the $SVNROOT environment variable is setup by these procedures automatically to be svn+ssh://svn.cern.ch/reps/atlasoff. To check software out from CERN's svn repository identification is required. IFIC user name is used by default in this process. If your CERN and IFIC user names are different you need to put your CERN user name manually like in the example above.
  3. --multitest If you are working with several releases, this argument overrides the default structure for test releases and adds a directory named after the release to the path specified by the testarea.
  4. --dbrelease This allows the default DBRelease release to be overridden by setting the $DBRELEASE_OVERRIDE environment variable. In the example above, "<latest>" corresponds to the most recent DBRelease being taken if multiple are installed.

2. Get and compile the User Analysis package :

ATLAS software is divided into packages, and these are managed through the use of a configuration management tool, CMT. This is used to copy ("check out") code from the main ATLAS repository and handle linking and compilation.

Note: Some tips for using CMT can be found at SoftwareDevelopmentWorkbookCmtTips.

Now let us get the User Analysis package and compile it. For that execute the following commands :

$> cd $TestArea
$> pwd
/afs/ific.uv.es/user/.../AthenaTestArea/16.0.2
$> cmt co -r UserAnalysis-00-15-04 PhysicsAnalysis/AnalysisCommon/UserAnalysis
$> cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt/
$> cmt config
$> source setup.sh
$> cmt make

To run the compiled software, first get your AnalysisSkeleton_topOptions_AutoConfig.py file by doing the following :


$> cd ../run
$> get_files AnalysisSkeleton_topOptions_AutoConfig.py

Note that, in order to set the AOD data you want to process, you have to edit this file and change line 15 as follows :

jp.AthenaCommonFlags.FilesInput = [ "put_here_the_name_of_the_AOD_you_want_to_process"] 

Note: For 7 TeV data look at /lustre/ific.uv.es/grid/atlas/atlasdatadisk/data10_7TeV

Finally run the following command :

$> athena.py AnalysisSkeleton_topOptions_AutoConfig.py
Your output file is called "AnalysisSkeleton.aan.root".



V.2.- Release 17 and higher (2015)

At IFIC, the ATLAS software is installed in AFS and can be accessed through a User Interface, so log into one of the User Interfaces :

$> ssh ui03

Note: The following example uses release 17.7.0 of the Atlas software. You can find older versions here

1. Preparing your account to use the ATLAS software :

To prepare your account, you just need to setup ATLAS software's environment and general tools (now you don't need to create a specific folder).

echo "--- set up ATLAS:"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
export CERN_USER=YourUserName && kinit -5 $CERN_USER@CERN.CH

2. Get and compile the User Analysis package :

ATLAS software is divided into packages, and these are managed through the use of a configuration management tool, CMT. This is used to copy ("check out") code from the main ATLAS repository and handle linking and compilation.

Note: Some tips for using CMT can be found at SoftwareDevelopmentWorkbookCmtTips.

Now let us get the User Analysis package and compile it. For that execute the following commands :

$> ls /afs/cern.ch/atlas/software/releases/   ###Look here what releases are available

$> RELEASE=17.7.0 
$> mkdir -p AthenaTestArea/$RELEASE && cd AthenaTestArea/$RELEASE
### Here are the available tags:   https://svnweb.cern.ch/trac/atlasoff/browser/PhysicsAnalysis/AnalysisCommon/UserAnalysis/tags

$> asetup --gcc46default ${RELEASE},rel_6,64,runtime,cvmfsonly,here --testarea=$HOME/AthenaTestArea --svnroot=svn+ssh://$CERN_USER@svn.cern.ch/reps/atlasoff --multitest --dbrelease "<latest>"

$> cmt co -r UserAnalysis-00-15-12 PhysicsAnalysis/AnalysisCommon/UserAnalysis
$> cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt/
$> cmt config
$> source setup.sh
$> cmt make

To run the compiled software, first get your AnalysisSkeleton_topOptions_AutoConfig.py file by doing the following :

$> cd ../run
$> get_files AnalysisSkeleton_topOptions_AutoConfig.py

Note that, in order to set the AOD data you want to process, you have to edit this file and change line 15 as follows :

jp.AthenaCommonFlags.FilesInput = [ "put_here_the_name_of_the_AOD_you_want_to_process"] 

Note: For 8 TeV data look at /lustre/ific.uv.es/grid/atlas/atlasdatadisk/data12_8TeV

Finally run the following command :

$> athena.py AnalysisSkeleton_topOptions_AutoConfig.py
Your output file is called "AnalysisSkeleton.aan.root".



VI.- Performing Atlas Distributed Data Analysis on the Grid - Ganga

The official way to perform ATLAS Data Analysis is using the Ganga or the Pathena tool. However, the Grid resources can also be used directly by the user. In the following, let us discuss how to use Ganga for Distributed Analysis in Atlas on the Grid. For detailed information on how to use the "Ganga Command Line Interpreter (CLI)" see the Working with Ganga document. See also the Introduction to GangaAtlas slides.

For Atlas Distributed Data Analysis using Ganga, visit this site for more details; there are tutorials for different versions at the bottom.


VI.1.- Setting up and configuring the Ganga Environment at IFIC

When running Ganga for the first time, a .gangarc configuration file will be created in our home directory. We then have to change some configuration parameters in it according to our needs. To do this, let us execute the following commands (note that we have to be logged into a User Interface machine and have a valid proxy-certificate if we want to run jobs on the Grid using Ganga) :

$> ssh ui00.ific.uv.es
$> source /afs/ific.uv.es/project/atlas/software/ganga/install/etc/setup-atlas.sh
$> export GANGA_CONFIG_PATH=GangaAtlas/Atlas.ini

If you want another version, pass it as an argument at the end:

$> source  /afs/ific.uv.es/project/atlas/software/ganga/install/etc/setup-atlas.sh 5.5.21

Create your configuration file (.gangarc)

$> ganga -g

Let us answer "yes" to the question asked by Ganga to create the .gangarc configuration file. We leave Ganga (with Ctrl-D) and use our favourite editor to edit the .gangarc file, then do the following changes corresponding to the ATLAS-IFIC environment :

In the section labelled [Athena] add the line:
    ATLASOutputDatasetLFC = lfcatlas.pic.es

In the section labelled [Configuration] add the line:
    RUNTIME_PATH = GangaAtlas:GangaPanda:GangaJEM

In the section labelled [LCG] add the lines:
    DefaultLFC = lfcatlas.pic.es
    DefaultSE = srmv2.ific.uv.es
    VirtualOrganisation = atlas

In the section labelled [defaults_GridProxy] add the line:
    voms = atlas

In the section labelled [defaults_VomsCommand] add the line:
    init = voms-proxy-init -voms atlas

The variable ATLASOutputDatasetLFC catalogues your output in the PIC cloud when the ATLASOutputDataset option is used. The variable RUNTIME_PATH selects the ATLAS applications. The variables DefaultLFC and DefaultSE define a catalogue and a Storage Element where your input files can be saved if they are bigger than 10 MB (the maximum input size on the Grid), so that Ganga can make a copy at the job site. The variables voms and init allow Ganga to create the correct grid proxy.
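Putting these edits together, the relevant fragment of the resulting .gangarc would look roughly as follows (only the options discussed above are shown; the file contains many more settings):

[Athena]
ATLASOutputDatasetLFC = lfcatlas.pic.es

[Configuration]
RUNTIME_PATH = GangaAtlas:GangaPanda:GangaJEM

[LCG]
DefaultLFC = lfcatlas.pic.es
DefaultSE = srmv2.ific.uv.es
VirtualOrganisation = atlas

[defaults_GridProxy]
voms = atlas

[defaults_VomsCommand]
init = voms-proxy-init -voms atlas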

Once these changes are done in the .gangarc configuration file, we are ready to use Ganga for Atlas distributed data analysis.


VI.2.- Running analysis using Athena with Ganga

In the following example we will see how to run a job using the release 16.0.2 of the Atlas software and User Analysis package. If we are using a different software release, we have to make the adequate changes when executing the setup commands.

First of all, login into a User Interface machine and get a valid proxy-certificate :

$> ssh ui00.ific.uv.es
$> voms-proxy-init -voms atlas

As the Athena framework will be used, we have to configure the environment variables accordingly by doing the Athena setup before running Ganga. Set up the Athena configuration variables and environment as follows (it is assumed that the "PhysicsAnalysis/AnalysisCommon/UserAnalysis" package is installed; see section V for how to install it) :

$> RELEASE=16.0.2
$> source /lustre/ific.uv.es/sw/atlas/local/setup.sh
$> mkdir -p AthenaTestArea/${RELEASE}
$> export AtlasSetup=${VO_ATLAS_SW_DIR}/software/${RELEASE}/AtlasSetup
$> alias asetup='source $AtlasSetup/scripts/asetup.sh'
$> asetup ${RELEASE} --testarea=$HOME/AthenaTestArea --multitest --dbrelease "<latest>"

IMPORTANT: if you want to use release 17 or higher, go to Release 17

IMPORTANT: Note that Ganga should be run from the run/ directory of the Physics Analysis package, so that Ganga recognizes your Athena package. So, let us change to the run directory :

$> cd $TestArea/PhysicsAnalysis/AnalysisCommon/UserAnalysis/run

The next command allows us to use the latest version of Ganga that is installed in our environment.

$> source /afs/ific.uv.es/project/atlas/software/ganga/install/etc/setup-atlas.sh
$> export GANGA_CONFIG_PATH=GangaAtlas/Atlas.ini

Suppose now that we want to run an Athena job (with a Monte Carlo input dataset) corresponding to the following Ganga Python configuration file named myGangaJob.py, and that this file is located in the run/ directory of our Physics Analysis package :

# FileName myGangaJob.py #########################

j = Job()
number=str(j.id)
j.name='twiki-Panda-'+number
j.application=Athena()
j.application.option_file=['AnalysisSkeleton_topOptions_AutoConfig.py']
j.application.max_events=-1
j.application.atlas_dbrelease='LATEST'
j.application.prepare(athena_compile=False)
j.splitter=DQ2JobSplitter()
j.splitter.numfiles=1
j.inputdata=DQ2Dataset()
j.inputdata.dataset= ["mc09_7TeV.105200.T1_McAtNlo_Jimmy.merge.AOD.e510_s765_s767_r1302_r1306/"]
#next sentence for small test, comment it for the full analysis:
j.inputdata.number_of_files=1
j.outputdata=DQ2OutputDataset()
j.outputdata.datasetname='user.mynickname.test.ganga.panda.'+number
j.outputdata.location='IFIC-LCG2_SCRATCHDISK'
j.backend=Panda()
j.do_auto_resubmit=True
j.submit()

# End File #########################

The variable 'number' just saves the id (the Ganga identification) of the job, in order to define a unique output dataset name (a requirement of DQ2). With j.outputdata.location you can choose the Storage Element for your output datasets according to the DDM policy. The output dataset is first stored on the scratchdisk of the site where the subjobs have run, and then a Datri request is made to your chosen location. The option j.do_auto_resubmit=True allows automatic resubmission of failed subjobs, but this only happens if some 'completed' subjobs exist.

You can see your job status and the stdout file in the Panda Monitor Users page (look for your name as it appears in your Grid certificate).

To make Ganga execute this file, do the following :

$> ganga

Now we are inside the Command Line Interpreter (CLI) of Ganga, so we can use Ganga's own commands. For example, in order to execute our "myGangaJob.py" file we use the execfile() command as follows :

In [1]: execfile('myGangaJob.py')


VI.3.- Some ganga commands for working with grid jobs

Other commands for working with our jobs and monitoring them are:

See all the jobs information:

In [1]: jobs

See only the information for a range of jobs:

In [1]: jobs.select(1,10)

See the Ganga Status of one job and of a subjob:

In [1]: jobs(10).status

In [1]: jobs(10).subjobs(2).status

See only the subjobs with a given status:

In [1]: jobs(10).subjobs.select(status='failed')

Count the subjobs with a given status:

In [1]: len(jobs(10).subjobs.select(status='failed'))

See one subjob in the panda monitor:

In [1]: ! firefox $jobs(731).subjobs(4826).backend.url  &

Kill the job:

In [1]: jobs(10).kill()

Remove one job:

In [1]: jobs(10).remove()

See the possible site names for running with the Panda backend:

In [1]: r=PandaRequirements()
In [1]: r.list_sites()



VII.- Performing Atlas Distributed Data Analysis on the Grid - Pathena (2015)

The official way to perform ATLAS Data Analysis is using the Ganga or the Pathena tool. However, the Grid resources can also be used directly by the user. In the following, let us discuss how to use Pathena for Distributed Analysis in Atlas on the Grid. For detailed information on how to use pathena see the How to submit Athena jobs to Panda twiki.


VII.1.- Setting up and configuring the Pathena Environment at IFIC

First, make sure you have a grid certificate. You should have usercert.pem and userkey.pem under ~/.globus.

$> ls ~/.globus/*
usercert.pem userkey.pem
$> export CERN_USER=YourUserName

All you need here is to put usercert.pem and userkey.pem under the ~/.globus directory. In particular, do not set up the grid runtime, i.e., pathena does not require "source .../grid-env.sh"; the grid runtime and the Athena runtime are incompatible.

###Here we have set up the cvmfs software and asked to set up the Panda Clients
echo "--- set up ATLAS:"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
echo "--- set up PandaClient:"
localSetupPandaClient --noAthenaCheck 
echo "--- getting a proxy:"
voms-proxy-init -voms atlas
### you should NOT try to setup both ROOT and an ATLAS release in the same session, as it can cause problems

$> RELEASE=17.7.0 
$> asetup --gcc46default ${RELEASE},rel_6,64,runtime,cvmfsonly,here --testarea=$HOME/AthenaTestArea --svnroot=svn+ssh://$CERN_USER@svn.cern.ch/reps/atlasoff --multitest --dbrelease "<latest>"
$> cd AthenaTestArea/17.7.0/PhysicsAnalysis/AnalysisCommon/UserAnalysis/run
$> get_files AnalysisSkeleton_topOptions_AutoConfig.py


VII.2.- Running analysis using Athena with Pathena

Once you have set up your pathena environment, you are ready to run your analysis job (this is an example) :
$> pathena AnalysisSkeleton_topOptions_AutoConfig.py --inDS=mc12_8TeV:mc12_8TeV.220232.MadGraphPythia_AUET2B_CTEQ6L1_pMSSM_EW_105014464.merge.AOD.e3203_a220_a205_r4540/ --outDS=user.${CERN_USER}.test00 --inputFileList=ListInput.txt

INFO : using CMTCONFIG=x86_64-slc6-gcc46-opt
INFO : extracting run configuration
INFO : ConfigExtractor > Input=POOL
INFO : ConfigExtractor > Output=AANT AANTupleStream AANT AnalysisSkeleton.aan.root
INFO : archiving source files
INFO : archiving InstallArea
INFO : checking symbolic links
INFO : uploading source/jobO files
INFO : submit
INFO : succeeded. new jediTaskID=5270895

Job records can be retrieved with pbook. In a pbook session, autocompletion is bound to the TAB key, and the up-arrow key recalls previous commands.

$> pbook
$> show()
$> Ctrl+D (press both to exit)


VII.3.- Some Pathena commands for working with grid jobs

Some options are explained here.

-v
Verbose; useful for debugging

--inDS
name of input dataset

--outDS
name of output dataset. should be unique and follow the naming convention

--libDS
name of a library dataset

--split
the number of sub-jobs to which an analysis job is split

--nFilesPerJob
the number of files on which each sub-job runs

--nEventsPerJob
the number of events per job

--nFiles
use a limited number of files in the input dataset

--nSkipFiles
the number of files in the input dataset one wants to skip counting from the first file of the dataset

--useAMIAutoConf
evaluates the inDS autoconf information from AMI. Boolean -- just set the option without arguments

--long
send job to a long queue

--blong
send build job to a long queue

--noBuild
skip buildJob

--extFile
pathena exports files with some special extensions (.C, .dat, .py .xml) in the current directory. If you want to add other files, specify their names, e.g., --extFile data1.root,data2.tre

--noSubmit
don't submit jobs

--tmpDir
temporary directory in which an archive file is created

--site
send job to a site, if it is AUTO, jobs will go to the site which holds the most number of input files

-c --command
one-liner, runs before any jobOs

--extOutFile
define extra output files, e.g., output1.txt,output2.dat

--fileList
List of files in the input dataset to be run

--addPoolFC
file names to be inserted into PoolFileCatalog.xml except input files. e.g., MyCalib1.root,MyGeom2.root

--skipScan
Skip LRC/LFC lookup

--inputFileList
name of file which contains a list of files to be run in the input dataset

--removeFileList
name of file which contains a list of files to be removed from the input dataset

--corCheck
Enable a checker to skip corrupted files

--inputType
File type in input dataset which contains multiple file types

--outTarBall
Save a copy of local files which are used as input to the build

--inTarBall
Use a saved copy of local files as input to build

--outRunConfig
Save extracted config information to a local file

--inRunConfig
Use a saved copy of config information to skip config extraction

--minDS
Dataset name for minimum bias stream

--nMin
Number of minimum bias files per one signal file

--cavDS
Dataset name for cavern stream

--nCav
Number of cavern files per one signal file

--trf
run transformation, e.g. --trf "csc_atlfast_trf.py %IN %OUT.AOD.root %OUT.ntuple.root -1 0"

--useNextEvent
Use this option if your jobO uses theApp.nextEvent() e.g. for G4 simulation jobs. Note that this option is not required when you run transformations using --trf.

--notSkipMissing
If input files are not read from SE, they will be skipped by default. This option disables the functionality

--pfnList
Name of file which contains a list of input PFNs. Those files can be un-registered in DDM

--individualOutDS
Create individual output dataset for each data-type. By default, all output files are added to one output dataset

--dbRelease
use non-default DBRelease or CDRelease (DatasetName:FileName). e.g., ddo.000001.Atlas.Ideal.DBRelease.v050101:DBRelease-5.1.1.tar.gz. To run with no dbRelease (e.g. for event generation) provide an empty string ('') as the release.

--dbRunNumber
RunNumber for DBRelease or CDRelease. If this option is used some redundant files are removed to save disk usage when unpacking DBRelease tarball. e.g., 0091890. This option is deprecated and to be used with 2008 data only.

--supStream
suppress some output streams. e.g., ESD,TAG

--ara
obsolete; please use prun instead

--ares
obsolete; please use prun instead
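For example, combining a few of these options with the job from section VII.2 (the input dataset and jobOptions file are the ones used there; the output dataset name and the splitting values are arbitrary):

$> pathena AnalysisSkeleton_topOptions_AutoConfig.py \
       --inDS=mc12_8TeV:mc12_8TeV.220232.MadGraphPythia_AUET2B_CTEQ6L1_pMSSM_EW_105014464.merge.AOD.e3203_a220_a205_r4540/ \
       --outDS=user.${CERN_USER}.test01 \
       --nFiles=10 --nFilesPerJob=2 --noSubmit    ### dry run: prepare the job but do not submit it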



VIII.- Several setups at IFIC (2015)

Available releases

Look here to see what releases are available:
ls /afs/cern.ch/atlas/software/releases  ###Athena
ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/
ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/root
ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/python
ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/rucio
ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/Ganga

PanDA Clients (pathena / prun / pbook)

###Here we have set up the cvmfs software and asked to set up the Panda Clients
echo "--- set up ATLAS:"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
echo "--- set up PandaClient:"
localSetupPandaClient --noAthenaCheck 
echo "--- getting a proxy:"
voms-proxy-init -voms atlas
### you should NOT try to setup both ROOT and an ATLAS release in the same session, as it can cause problems

Grid environment + Panda tools

echo "--- set up ATLAS:"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
echo "--- set up Panda tools, DQ2 & PyAMI:"
export CERN_USER=YourUserName
export RUCIO_ACCOUNT=$CERN_USER
localSetupPandaClient --noAthenaCheck
localSetupDQ2Client --skipConfirm
localSetupPyAMI
#export DQ2_LOCAL_SITE_ID=IFIC-LCG2_LOCALGROUPDISK
echo "--- getting a proxy:"
voms-proxy-init -voms atlas

Root

echo "---set up ATLAS:"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
echo "---set up Root:"
export CERN_USER=YourUserName
localSetupROOT --skipConfirm
###use this line if you want a specific Root release
###localSetupROOT 5.34.14-x86_64-slc6-gcc4.7 --skipConfirm

Python

echo "---set up ATLAS:"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
echo "---set up Python:"
export CERN_USER=YourUserName
localSetupPython 2.7.3-x86_64-slc6-gcc47
If you always want to use the same release, you can add these lines to your .bash_profile:
export PYTHONDIR=/afs/cern.ch/sw/lcg/external/Python/2.7.3/x86_64-slc6-gcc48-opt
export PYTHONPATH=$PYTHONPATH:$ROOTSYS/lib
export LD_LIBRARY_PATH=$ROOTSYS/lib:$PYTHONDIR/lib:$LD_LIBRARY_PATH:/opt/rh/python27/root/usr/lib64

DQ2

echo "--- set up ATLAS:"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
export CERN_USER=YourUserName
echo "--- set up DQ2:"
export RUCIO_ACCOUNT=$CERN_USER
localSetupDQ2Client --skipConfirm
### run dq2-sources to see the available sites, or use export DQ2_LOCAL_SITE_ID=ROAMING
export DQ2_LOCAL_SITE_ID=IFIC-LCG2_LOCALGROUPDISK
echo "--- getting a proxy:"
voms-proxy-init -voms atlas

Rucio

echo "--- set up ATLAS:"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
export CERN_USER=YourUserName
echo "--- set up Rucio:"
export RUCIO_ACCOUNT=$CERN_USER
export RUCIO_AUTH_TYPE=x509_proxy
localSetupRucioClients
###The clients are available at cern and outside cern with cvmfs through asetup:
###asetup 17.1.4,rucio
###source /afs/cern.ch/atlas/offline/external/GRID/ddm/rucio/rucio_testing.sh 
echo "--- getting a proxy:"
voms-proxy-init -voms atlas

These are the paths where your files will be stored (at IFIC)

/lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/rucio/user/YourUserName
/lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/user/YourUserName
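If lustre is mounted on the machine you are working on (as it is on the IFIC user interfaces), you can browse these areas directly, for example:

ls /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/rucio/user/YourUserName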

Ganga

echo "--- set up ATLAS:"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
export CERN_USER=YourUserName
echo "--- set up Ganga:"
ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/Ganga
localSetupGanga
###use this line if you want a specific Ganga release
### localSetupGanga 6.0.43

Athena

echo "--- set up ATLAS:"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
export CERN_USER=YourUserName
echo "--- set up Athena:"
ls /afs/cern.ch/atlas/software/releases/   ###Look here what releases are available
asetup 18.1.2

Another example, setting up release 20.0.0.2 and the Frontier server configuration:
echo "--- set up ATLAS: "
echo "look here: ls /afs/cern.ch/atlas/software/releases"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
export CERN_USER=YourUserName
echo "--- set up Athena:"
asetup --gcc48default 20.0.0.2,rel_6,64,runtime,cvmfsonly,here   ### or asetup 20.0.0.2,here
export FRONTIER_SERVER="(serverurl=http://ccfrontier.in2p3.fr:23128/ccin2p3-AtlasFrontier)(serverurl=http://atlasfrontier.cern.ch:8000/atlr)(proxyurl=http://sq5.ific.uv.es:3128)(proxyurl=http://atlsquid.pic.es:3128)" 




IX.- Useful info/macros (2015)

IX.1.- TopRootCore (AnalysisTop)

Go to the TopRootCoreRelease twiki and see which release you need. In this example we'll use AnalysisTop-1.5.0 == TopRootCoreRelease-14-00-24.

IX.1.1- First time: setup of AnalysisTop

For sites with access to cvmfs, e.g. lxplus, Birmingham you can do:
mkdir AnalysisTop-1.5.0_ttbarResonances
cd AnalysisTop-1.5.0_ttbarResonances
setupATLAS
rcSetup Top,1.5.0
#rc version | grep TopD3PDBoosted
rc checkout_pkg atlasoff/PhysicsAnalysis/TopPhys/TopD3PDBoosted/tags/TopD3PDBoosted-14-00-22
#rc version | grep TopD3PDCorrection
rc checkout_pkg atlasoff/PhysicsAnalysis/TopPhys/TopD3PDCorrections/tags/TopD3PDCorrections-12-01-44
#rc version | grep JetUncertainties
rc checkout_pkg atlasoff/Reconstruction/Jet/JetUncertainties/tags/JetUncertainties-00-08-16
#rc version | grep ttResoSingleLepton
rc checkout_pkg atlasphys/Physics/Top/PhysAnalysis/ttResoSingleLepton/tags/ttResoSingleLepton-00-00-42
rc find_packages
rc compile
exit

Attention! Check that all the new packages you have downloaded have the same versions as the ones listed above:

rc version | grep TopD3PDBoosted
rc version | grep TopD3PDCorrection
rc version | grep JetUncertainties
rc version | grep ttResoSingleLepton

You might want to make a script called setup.sh in this directory containing the snippet below. Note that it is subtly different from what is written above: rcSetup only needs to know the release for the first-time setup; after that it can work it out on its own. The script also sets up the PanDA, DQ2 and PyAMI clients, since they are useful.

#!/bin/bash
echo
echo
echo "========================= AnalysisTop-1.5.0_ttbarResonances ========================="
date +"setups %x %r"
echo
echo
echo "--- set up ATLAS:"
setupATLAS
echo
echo "--- set up Panda, DQ2 & PyAMI:"
export CERN_USER=vsanchez
export RUCIO_ACCOUNT=$CERN_USER
localSetupPandaClient --noAthenaCheck
localSetupDQ2Client --skipConfirm
localSetupPyAMI
###dq2-sources
#export DQ2_LOCAL_SITE_ID=IFIC-LCG2_LOCALGROUPDISK
echo
echo "--- getting a proxy:"
voms-proxy-init -voms atlas
echo
echo "--- set up AnalysisTop-1.5.0_ttbarResonances"
rcSetup #notice I dropped the parameters
echo

IX.1.2- Setup run-time environment (needs to be done for each new shell)

This makes sure all the paths are set correctly and must be done every time.
cd AnalysisTop-1.5.0_ttbarResonances
source setup.sh
That's it, isn't that much better than the old way?

IX.1.3- Make a run/ directory and link to the relevant packages

mkdir run && cd run
ln -s $ROOTCOREBIN/data .
ln -s $ROOTCOREBIN/bin/x86_64-slc6-gcc47-opt bin
cp $ROOTCOREBIN/data/TopD3PDAnalysis/settings.txt .

### our files are stored in the folder: /afs/cern.ch/work/v/vsanchez/public/forDan_AnalysisTop/
cp /afs/cern.ch/work/v/vsanchez/public/file_list/file_list_5files.txt $ROOTCOREBIN/../run

cp /afs/cern.ch/work/v/vsanchez/public/forDan_AnalysisTop/run/settings_noSys.txt .   ###to run without systematics
cp /afs/cern.ch/work/v/vsanchez/public/forDan_AnalysisTop/run/settings_withSys.txt .   ###to run with systematics

cp /afs/cern.ch/work/v/vsanchez/public/forDan_AnalysisTop/TopD3PDBoosted/Root/*cxx $ROOTCOREBIN/../TopD3PDBoosted/Root
cp /afs/cern.ch/work/v/vsanchez/public/forDan_AnalysisTop/TopD3PDBoosted/TopD3PDBoosted/*h $ROOTCOREBIN/../TopD3PDBoosted/TopD3PDBoosted
cp /afs/cern.ch/work/v/vsanchez/public/forDan_AnalysisTop/TopD3PDBoosted/util/*cxx $ROOTCOREBIN/../TopD3PDBoosted/util

cp /afs/cern.ch/work/v/vsanchez/public/forDan_AnalysisTop/TopD3PDCorrections/data/BTagCalibration.env_IFIC $ROOTCOREBIN/../TopD3PDCorrections/data/BTagCalibration.env

cp /afs/cern.ch/work/v/vsanchez/public/forDan_AnalysisTop/JetUncertainties/Root/JESUncertaintyProvider.cxx $ROOTCOREBIN/../JetUncertainties/Root

cd $ROOTCOREBIN/../TopD3PDBoosted/cmt
make -f Makefile.RootCore
cd $ROOTCOREBIN/../TopD3PDCorrections/cmt
make -f Makefile.RootCore
cd $ROOTCOREBIN/../JetUncertainties/cmt
make -f Makefile.RootCore

cd $ROOTCOREBIN/../run

We can disable systematic variations to make the code run faster. Edit settings.txt and uncomment the line #noSyst 1. Note that for TopRootCoreRelease-14-00-23 and earlier, the settings file is located in ../TopD3PDAnalysis/control/.
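For reference, after uncommenting, the relevant line in the settings file would simply read (a sketch; check the exact option name in your own copy of settings.txt):

noSyst 1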

IX.1.4- Make a file list of what you want to run over (for boosted)

You need to make a text file (file_list.txt) in the run/ directory containing the filenames of the input DPDs. Use a new line for each file.

Boosted example: execute the command below inside your run/ directory and it will make a test file list for you.

cp /afs/cern.ch/work/v/vsanchez/public/file_list/file_list_5files.txt ./file_list.txt
### if your files are stored on eos:   echo root://eosatlas//eos/atlas/FOLDER/FILE.root > file_list.txt
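For illustration, file_list.txt simply contains one input file per line; the paths below are hypothetical:

root://eosatlas//eos/atlas/FOLDER/FILE1.root
root://eosatlas//eos/atlas/FOLDER/FILE2.root
/lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/rucio/user/YourUserName/SOME.DATASET/NTUP_COMMON.file1.root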

IX.1.5- Run D3PD2MiniSLBoost locally

Inside the run/ directory, execute the boosted semi-leptonic mini-ntuple making code (pay attention to the name of the settings.txt file!):
////////////////////////////////////////////////
 -common    -rndmType debug -ttresCutflow.
////////////////////////////////////////////////
D3PD2MiniSLBoost -f file_list.txt -p settings_noSys.txt -mcType fullSim -common -boost -rndmType debug -ttresCutflow -useTruthParticles -saveTruthTopDecay > OUTtruthDebug_noSys_MC.txt
D3PD2MiniSLBoost -f file_list.txt -p settings_withSys.txt -mcType fullSim -common -boost -rndmType debug -ttresCutflow -useTruthParticles -saveTruthTopDecay > OUTtruthdebug_Sys_MC.txt
D3PD2MiniSLBoost -f file_list_dataEgamma.txt -p settings_noSys.txt -dataStream Egamma -common -boost -rndmType debug -ttresCutflow > OUTdebug_dataEgamma.txt
D3PD2MiniSLBoost -f file_list_dataMuons.txt -p settings_noSys.txt -dataStream Muons -common -boost -rndmType debug -ttresCutflow > OUTdebug_dataMuons.txt

////////////////////////////////////////////////
 -common 
////////////////////////////////////////////////
D3PD2MiniSLBoost -f file_list.txt -p settings_noSys.txt -mcType fullSim -common -boost -useTruthParticles -saveTruthTopDecay > OUT_noSys.txt
D3PD2MiniSLBoost -f file_list.txt -p settings_noSys.txt -mcType atlfast2 -common -boost -useTruthParticles -saveTruthTopDecay > OUT_noSys_atlfast2.txt

////////////////////////////////////////////////
-common -sysOn
////////////////////////////////////////////////
D3PD2MiniSLBoost -f file_list.txt -p settings_withSys.txt -mcType fullSim -common -boost -useTruthParticles -saveTruthTopDecay -sysOn > OUT_withSys.txt
D3PD2MiniSLBoost -f file_list.txt -p settings_withSys.txt -mcType atlfast2 -common -boost -useTruthParticles -saveTruthTopDecay -sysOn > OUT_withSys_atlfast2.txt

////////////////////////////////////////////////
 -common -dataStream
////////////////////////////////////////////////
D3PD2MiniSLBoost -f file_list_dataEgamma.txt -p settings_noSys.txt -dataStream Egamma -common -boost > OUT_dataEgamma.txt
D3PD2MiniSLBoost -f file_list_dataMuons.txt -p settings_noSys.txt -dataStream Muons -common -boost > OUT_dataMuons.txt

IX.1.6- Running TRC on the GRID

This is an example to run TRC on the grid:
#!/bin/bash

echo
echo "---------------------------- beginning ----------------------------"
date +"starting at %x %r"
echo

###==========================================
export CERN_USER=YourCernUser
echo $CERN_USER

WantToRun="ttbar"  
VERSION=v00 

DESTINATION=IFIC-LCG2_LOCALGROUPDISK ###--destSE=$DESTINATION 
###==========================================


FOLDER=ListOfFiles ### This folder is in TopRootCore/run/runGRID/ListOfFiles


if [ "$WantToRun" = "ttbar" ]; then 
    #mcTYPE="fullSim"
    #LIST="$FOLDER/listMC_ttbar_fullSim.txt" 
    mcTYPE="atlfast2"
    LIST="$FOLDER/listMC_SM_atlfast2_Powheg1727.txt"
    echo
fi


if [ "$WantToRun" = "dataEL" ]; then
    LIST="$FOLDER/listDATA_Egamma.txt"
   echo
fi   


if [ "$WantToRun" = "dataMU" ]; then
    LIST="$FOLDER/listDATA_Muons.txt"
   echo
fi



#####################################
echo

for dataset in `cat $LIST`;
do

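    ### build the output dataset name from the dot-separated fields of the ATLAS input dataset name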
    IDnumber=`echo $dataset | cut -d '.' -f2` 
    Description=`echo $dataset | cut -d '.' -f3`
    WrongTags=`echo $dataset | cut -d '.' -f6` 
    Tags=`echo $WrongTags | cut -d '/' -f1`
    OUTdataset=user.${CERN_USER}.${IDnumber}.${Description}.boosted.${Tags}._$VERSION/

    InputToTxt="InputToTxt.txt"


    if [ "$WantToRun" = "ttbar" ]; then   ###modelling
        OUTPUTS="el.root,mu.root"
        EXECUT="D3PD2MiniSLBoost -f $InputToTxt -p settings_noSys.txt -boost -common -mcType $mcTYPE -useTruthParticles"
    fi
    
    
    if [ "$WantToRun" = "dataEL" ]; then
        OUTPUTS="el.root"
        EXECUT="D3PD2MiniSLBoost -f $InputToTxt -p settings_noSys.txt -boost -common -dataStream Egamma"
    fi   
    
    if [ "$WantToRun" = "dataMU" ]; then
        OUTPUTS="mu.root"
        EXECUT="D3PD2MiniSLBoost -f $InputToTxt -p settings_noSys.txt -boost -common -dataStream Muons"
    fi


    echo
    echo
    echo
    echo
    echo "************ we're running: $WantToRun *************"
    echo " input dataset = $dataset"
    echo "output dataset = $OUTdataset"
    echo


    rootVERSION=5.34.14 ###--rootVer=$rootVERSION
    CMTconfig="x86_64-slc6-gcc47-opt"
    FILESperJOB=5 
    EXCLUDEsites="ANALY_NSC,ANALY_wuppertalprod" 


    echo "========= running $WantToRun with prun ========="
    
    prun --inDS=$dataset --useRootCore --outDS=$OUTdataset --outputs=$OUTPUTS --forceStaged --nFilesPerJob=$FILESperJOB --writeInputToTxt=IN:$InputToTxt --rootVer=$rootVERSION --cmtConfig=$CMTconfig --exec="ln -s \$ROOTCOREBIN/data .; ln -s \$ROOTCOREBIN/bin/x86_64-slc6-gcc47-opt bin; echo >> $InputToTxt; $EXECUT" --useShortLivedReplicas --excludedSite=$EXCLUDEsites #--destSE=$DESTINATION --noSubmit
  
    
done
#####################################

echo
date +"ending at %x %r"
echo "---------------------------- END ----------------------------"
echo



IX.2.- Running transformation with pathena

EventIndexProducer:
pathena --trf "POOLtoEI_tf.py --inputPOOLFile %IN --outputEIFile %OUT.ei.pkl" --inDS data12_8TeV.00209381.physics_Muons.merge.AOD.f473_m1218 --outDS user.sanchezj.test.20141216.1 --nFilesPerJob 1 --site=ANALY_IFIC --nFiles 1 --nSkipFiles=10


IX.3.- Running transformation with prun

EventIndexProducer:
prun --exec "source script.sh %IN" --inDS data11_7TeV.periodK.physics_Muons.PhysCont.AOD.repro09_v01/ --outDS user.sanchezj.test.20140509.v01  --nFilesPerJob 1 --useAthenaPackages --forceStaged 

Where script.sh is:

    POOLtoEI_tf.py --trigger "true" --inputPOOLFile $1 --outputEIFile data.ei.pkl
    echo "=================================="
    unpickEI.py data.ei.pkl
    echo "=================================="
    sendEI.py -v --trigger data.ei.pkl 


IX.4.- Retrieving your data stored at IFIC

If you ask for a replica of your output-datasets (if destination site is IFIC-LCG2_LOCALGROUPDISK), these will be stored here:
/lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/rucio/user/YourUserName

If you want to access all your files, you can use the script below, called links_rucio.py (first set up your grid environment and do export DQ2_LOCAL_SITE_ID=IFIC-LCG2_LOCALGROUPDISK, with no spaces around the '='). The script generates a new folder containing symbolic links pointing to your files:

# Create symlinks to files stored under the rucio directory convention
#Ready for IFIC SE: srm://srmv2.ific.uv.es/lustre/.....

#Execution: python links_rucio.py NEWDIRECTORY DATASET

import commands
import os
import sys

#get the arguments
newdirectory=sys.argv[1]
dataset=sys.argv[2]
####

# Create the directory if it does not exist, then change into it
if not os.path.exists(newdirectory):
    os.makedirs(newdirectory)
os.chdir(newdirectory)

#change the word after grep according to your site Storage Element
output=commands.getoutput("dq2-ls -fp "+dataset+" | grep srm")
files=output.split("\n")

for i in files:
    #print "File: "+i
    divisions=i.split('/')
    #print divisions[-1]
    ## change the slice i[22:] according to the SURL prefix of your site Storage Element (here: srm://srmv2.ific.uv.es)
    print i[22:]
    os.symlink(i[22:],divisions[-1])

#come back to the initial directory
os.chdir('..')
print "The End"




X.- Monitoring ES-ATLAS-T2 (IFIC-LCG2, IFAE & UAM-LCG2) and the Iberian Cloud (2015)

In this section you can find useful links to monitor different Tier-1 and Tier-2.

Spanish Tier-2 (ES-ATLAS-T2): IFIC-LCG2 + IFAE + UAM-LCG2

Iberian Cloud (Spain & Portugal): IFAE + IFIC-LCG2 + PIC + UAM-LCG2 + LIP-COIMBRA + LIP-LISBON + NCG_INGRID_PT

Dashboard:

Availability and Reliability graphs:

ANALYSIS JOBS:


PRODUCTION JOBS:


ANALYSIS+PRODUCTION JOBS:


Transfer Volume: (choose format MMM yyyy)
ATLAS Dashboard - ATLAS DDM DASHBOARD 2.0 M5

Source: (empty=ATLAS)
Destinations: IFAE IFIC-LCG2 LIP-COIMBRA LIP-LISBON NCG-INGRID-PT PIC UAM-LCG2


Transfer Throughput (download speed): (select TYPE=Rate)
Source: (empty=ATLAS) (good!!!)
Destinations: IFAE IFIC-LCG2 LIP-COIMBRA LIP-LISBON NCG-INGRID-PT PIC UAM-LCG2

Source: IFAE IFIC-LCG2 LIP-COIMBRA LIP-LISBON NCG-INGRID-PT PIC UAM-LCG2
Destinations: (empty=ATLAS)





IFIC
Edificio Institutos de Investigación
Paterna, apartado 22085
E-46071 Valencia (Spain)


-- MohammedKaci - ElenaOliverGarcia - MiguelVillaplana - 09 Sep 2011 - VictoriaSanchezMartinez - February 2015
