r4 - 16 May 2019 - JavierSanchez

Pheno Farm


The pheno farm comprises:

  • 1 User interface (phenoui.ific.uv.es) with:
    • 2 x AMD EPYC 7351 16-Core Processor
    • 128 GB RAM DDR4 @ 2667 MHz
    • 1 HD SSD SATA 240 GB (OS)
    • 1 HD 1TB SATA 7200 rpm
  • 6 Worker Nodes
    • 2 x AMD Ryzen Threadripper 1950X 16-Core Processor
    • 128 GB RAM DDR4 @ 2400 MHz
    • 1 HD 1TB SATA 7200 rpm

User access

Users can log in to this cluster via ssh to the User Interface machine phenoui.ific.uv.es using their IFIC Kerberos username and password. Access requires prior registration; please contact the Project Leader to obtain access rights.
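Once registered, the login step looks like this (the username ax2512 is the placeholder example used below; replace it with your own IFIC username):

```shell
# Log in to the user interface with your IFIC Kerberos credentials
ssh ax2512@phenoui.ific.uv.es
```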


File system

User files reside on the Lustre file system, which is globally accessible from all the nodes. The home path is /lhome/ific/initial_letter/username, for example /lhome/ific/a/ax2512. The default quota is 100 GB and 200,000 inodes. An inode is a filesystem data structure that stores file and directory metadata, so inodes are consumed every time you create a file or directory.
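To see how much of your disk and inode quota is in use, Lustre provides the standard lfs quota command (shown as a sketch; the exact options accepted may depend on this installation's configuration):

```shell
# Report current block (disk) and inode usage for your user
# on the Lustre mount point
lfs quota -u $USER /lhome
```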

Extra storage on the User Interface is available under /l/uname, but it cannot be accessed from the worker nodes.

The file system is not backed up at this moment, so be careful when operating on files.

Job management system

HTCondor is the resource management system running on this cluster. It manages the job workflow and allows users to submit jobs for execution on the worker nodes. Direct access to the worker nodes is not allowed.

Each worker node has a partitionable slot that accepts jobs for processing; HTCondor handles job scheduling and execution. When a job does not require all of a node's resources, the slot is divided so more jobs can run on the node. CPU and memory resources are subtracted in chunks from the main slot.

slot                        #CPUs (cores)   Mem (MB)   chunk reservation
slot1@phenoui.ific.uv.es    64              64322      8 cores, 2048 MB
slot1@pheno01.ific.uv.es    32              32117      8 cores, 2048 MB
slot1@pheno02.ific.uv.es    32              32117      8 cores, 2048 MB
slot1@pheno03.ific.uv.es    32              32117      8 cores, 2048 MB
slot1@pheno04.ific.uv.es    32              32117      8 cores, 2048 MB
slot1@pheno05.ific.uv.es    32              32117      8 cores, 2048 MB
slot1@pheno06.ific.uv.es    32              32117      8 cores, 2048 MB
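As a sketch, a minimal HTCondor submit description that fits within one 8-core, 2048 MB chunk could look like the following (the file name job.sub and the executable run_analysis.sh are hypothetical placeholders, not part of this cluster's setup):

```
# job.sub - minimal submit description for one 8-core, 2048 MB chunk
universe       = vanilla
executable     = run_analysis.sh
output         = job.$(ClusterId).out
error          = job.$(ClusterId).err
log            = job.$(ClusterId).log
request_cpus   = 8
request_memory = 2048
queue
```

The job would then be submitted from the User Interface with condor_submit job.sub.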

For the time being, there is no time limit for jobs in this cluster.

HTCondor tries to run jobs from different users in a fair-share way. Job priorities among users take into account the CPU time each user has previously consumed, so CPU time is distributed evenly across all users.
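The standard HTCondor command-line tools can be used from the User Interface to inspect the queue, the slots, and the fair-share priorities (a sketch of typical invocations):

```shell
condor_q          # list your queued and running jobs
condor_status     # show the state of the partitionable slots
condor_userprio   # show the per-user priorities used for fair share
```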

A quick usage reference for HTCondor can be found here.

-- Last update: 12 Mar 2019