1. Introduction

1.1. Purpose of this document

This document introduces users to the U.S. Army Research Laboratory (ARL) DoD Supercomputing Resource Center (DSRC). It provides an overview of available resources, links to important documentation, important policies governing the use of our systems, and other information to help you make efficient and effective use of your allocated hours.

1.2. About the ARL DSRC

The ARL DSRC is one of five DSRCs managed by the DoD High Performance Computing Modernization Program (HPCMP). The DSRCs deliver a range of compute-intensive and data-intensive capabilities to the DoD science and technology, test and evaluation, and acquisition engineering communities. Each DSRC operates and maintains major High Performance Computing (HPC) systems and associated infrastructure, such as data storage, in both unclassified and classified environments. The HPCMP provides user support through a centralized help desk and data analysis/visualization group.

The ARL DSRC is a supercomputing and computational science facility that supports a broad and diverse user base in the DoD research, development, test, and evaluation (RDT&E) communities. The Center is located at Aberdeen Proving Ground, Maryland, and is organizationally aligned under the Combat Capabilities Development Command (CCDC), U.S. Army Research Laboratory, Computational and Information Sciences Directorate (CISD). The mission of the CCDC ARL DSRC is to provide world-class high performance computing, advanced networking, and computational science tools and expertise in support of the DoD RDT&E communities.

1.3. Who our services are for

The HPCMP's services are available to researchers in the Research, Development, Test, and Evaluation (RDT&E) and acquisition engineering communities of the DoD and its Services and Agencies, to DoD contractors, and to university staff working on a DoD research grant.

For more details, see the HPCMP presentation "Who may run on HPCMP Resources?"

1.4. How to get an account

Anyone meeting the above criteria may request an HPCMP account. A Help Desk video is available to guide you through the process of getting an account. To begin the account application process, visit HPC Centers: Obtaining an Account, and follow the instructions presented there.

Once you have an active pIE User Account, visit the ARL accounts page for instructions on how to request accounts on the ARL DSRC HPC systems. If you need assistance with any part of this process, please contact the HPC Help Desk at accounts@helpdesk.hpc.mil.

1.5. Visiting the ARL DSRC

If you need to travel to the ARL DSRC, there are security procedures that must be completed BEFORE planning your trip. Please visit our Planning a Visit page and coordinate with your Service/Agency Approval Authority (S/AAA) to ensure that all of your credentials are in place and all visit requirements are met.

2. Policies

2.1. Baseline Configuration (BC) policies

The Baseline Configuration Team sets policies that apply to all HPCMP HPC systems. The BC Policy Compliance Matrix provides an index of all BC policies and compliance status of systems at each DSRC.

2.2. Login node abuse policy

Interactive use of the login nodes on ARL DSRC systems is restricted to 15 minutes of CPU time per processor. Any interactive job or process that exceeds this 15-minute CPU time limit will be killed automatically by system monitoring software. Interactive usage should be limited to tasks such as program development (including debugging and performance tuning), job preparation and submission, and the pre- and post-processing of data.
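
As a sketch of how to keep longer interactive work off the login nodes (this assumes a PBS-style scheduler and the interactive queue listed in section 2.5; the project ID and node geometry below are placeholders, and the Scheduler Guides give the authoritative syntax for each system):

  # Request one node in the interactive queue for two hours; once the job
  # starts, the shell runs on a compute node, so CPU-intensive work no longer
  # counts against the 15-minute login-node limit.
  qsub -I -q interactive -l select=1:ncpus=44 -l walltime=02:00:00 -A MYPROJECT01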

2.3. File space management policy

2.3.1. INODE limits

The ARL DSRC implements INODE (number of files) limits on the $WORKDIR file systems on a per-user basis.

The limits are:

  • Soft limit: 50,000,000 files – a warning email is sent
  • Hard limit: 100,000,000 files – write permission is denied

These limits apply to all users. With justification, they can be adjusted; contact the ARL DSRC Help Desk for assistance.
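
To check how close you are to these limits, something like the following can be used (a sketch assuming $WORKDIR resides on a Lustre file system; the actual path referenced by $WORKDIR varies by system):

  # Report block and inode usage and quotas for your account on the work file system.
  lfs quota -u $USER $WORKDIR

  # A rough manual count of files under your work directory (can be slow on large trees).
  find $WORKDIR -type f | wc -l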

2.3.2. System scrubber

The scratch file system, /p/work1 or /work, should be used for active temporary data storage and batch processing. A system "scrubber" monitors utilization of the scratch space; files not accessed within 21 days are subject to removal, although they may remain longer if space permits. There are no exceptions to this policy. Users who wish to keep files for long-term storage should copy them back into their /home or /archive directories to avoid data loss by the scrubber. Users are responsible for archiving files from the scratch file systems. This file system is considered volatile working storage, and no automated backups are performed.

Note: Please do not use /tmp or /var/tmp for temporary storage!
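
Before the scrubber removes data you still need, a typical retention workflow looks like the following (a minimal sketch using the $WORKDIR and $ARCHIVE_HOME variables described in section 3.2.1; the directory and file names are placeholders, and if the archive file system is not directly mounted on the HPC system, use the transfer methods in section 2.10.1 or the Archive User Guide instead of cp):

  # Bundle a completed run directory into a single tar file to reduce file count.
  cd $WORKDIR
  tar -czf case_042_results.tar.gz case_042/

  # Copy the bundle to long-term storage and verify it arrived before cleaning up.
  cp case_042_results.tar.gz $ARCHIVE_HOME/
  ls -l $ARCHIVE_HOME/case_042_results.tar.gz && rm -r case_042 case_042_results.tar.gz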

2.3.3. Data transfers between systems or to/from the Center

A data transfer can be requested by completing the Data Transfer Request Form and sending it to the ARL DSRC Help Desk. Contact the Help Desk to have the form sent to you.

2.4. Maximum session lifetime policy

To provide users with a more secure high performance computing environment, the ARL DSRC has implemented a limit on the lifetime of all terminal/window sessions. Any idle terminal or window session connections to the ARL DSRC shall be terminated after 10 hours. Regardless of activity, any terminal or window session connections to the ARL DSRC shall be terminated after 20 hours. A 15-minute warning message shall be sent to each such session prior to its termination.

2.5. Batch use policy

The primary resource to schedule jobs on all systems is the node. Users request a certain number of nodes for a certain length of time. Limits on the number of nodes and length of a job vary by system and queue.

In the case where a system has nodes with more memory (large-memory nodes) than other nodes, the scheduler will place jobs requiring more memory than the default memory per processor on the large-memory nodes.

Although every attempt will be made to keep entire systems available, interrupts will occur, and more frequently on nodes with larger numbers of processors. To protect against system interrupts, users should save the state of their jobs where possible; most ARL DSRC-supported applications can create restart files so that runs do not have to start from the beginning. Users running long jobs without saving state run at risk with respect to system interrupts. Use of system-level checkpointing is not recommended.

All HPC systems have identical queue names: urgent, debug, HIE, high, frontier, standard, transfer, and background; however, each queue has different properties, as specified in the table below. Each queue is assigned a priority factor within the batch system, and the relative priorities are shown in the table. Jobs in queues other than background accrue additional priority based on time in queue. Job scheduling uses slot reservation based on these priority factors and increases system utilization via backfilling while jobs wait for resources to become available.

Queue Descriptions and Limits on ARL DSRC Systems

  Priority   Queue Name      Max Wall Clock Time   Max Cores Per Job   Comments
  Highest    transfer        48 Hours              1                   Data transfer for user jobs
     |       urgent          96 Hours              N/A                 HPCMP Urgent projects
     |       debug           1 Hour                480                 User testing
     |       high            96 Hours              N/A                 HPCMP High-priority projects
     |       frontier        168 Hours             N/A                 HPCMP Frontier projects
     |       cots            96 Hours              N/A                 Abaqus and Fluent jobs
     |       interactive     12 Hours              N/A                 Interactive jobs
     |       standard        168 Hours             N/A                 Normal user jobs
     v       standard-long   200 Hours             N/A                 ARL DSRC permission required
  Lowest     background      24 Hours              N/A                 Unrestricted access - no allocation charge

  (Priority decreases from top to bottom.)

In conjunction with the HPCMP Baseline Configuration policy for Common Queue Names across the allocated centers, the ARL DSRC will honor batch jobs that include the queue name for urgent, high (high-priority) and frontier.

Any project with an allocation may submit jobs to the background queue. Projects that have exhausted their allocations will only be able to submit jobs to the background queue.
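
A minimal batch script sketch is shown below (PBS-style directives are assumed; the queue, core counts, project ID, and launcher are placeholders, and the Scheduler Guides give the authoritative syntax for each system):

  #!/bin/bash
  ## Example job: two nodes in the standard queue for four hours.
  #PBS -q standard
  #PBS -l select=2:ncpus=44:mpiprocs=44
  #PBS -l walltime=04:00:00
  #PBS -A MYPROJECT01
  #PBS -N example_job
  #PBS -j oe

  # Run from the scratch file system, not the home directory.
  cd $WORKDIR
  mpiexec ./my_application > my_application.out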

2.6. Special request policy

All special requests for allocated HPC resources, including increased priority within queues, increased queue limits on the maximum number of cores and wall time, and dedicated use, should be directed to the HPC Help Desk. Request approval requires documentation of the requirement and associated justification, verification by the ARL DSRC support staff, and approval from the designated authority, as shown in the following table. The ARL DSRC Director may permit special requests for HPC resources independent of this model for exceptional circumstances.

Approval Authorities for Special Resource Requests

  Resource Request                                               Approval Authority
  Up to 10% of an HPC system/complex for 1 week or less          ARL DSRC Director or Designee
  Up to 20% of an HPC system/complex for 1 week or less          S/AAA
  Up to 30% of an HPC system/complex for 2 weeks or less         Army/Navy/AF Service Principal on HPC Advisory Panel
  Up to 100% of an HPC system/complex for greater than 2 weeks   HPCMP Program Director or Designee

Contact Information
If you have any questions concerning this policy, please contact the HPC Help Desk at 1-877-222-2039 or via email at help@helpdesk.hpc.mil.

2.7. Account removal policy

This policy covers the disposition or removal of user data when the user is no longer eligible for a given HPCMP account on any one or more systems in the HPCMP inventory.

At the time a user becomes ineligible for an HPCMP user account, the user's access to that account will be disabled.

The user and the Principal Investigator (PI) are responsible for arranging for the disposition of the data prior to account deactivation. The user may request special assistance or specific exemptions or extensions, based on such criteria as availability of resources, technical difficulties or other special needs. If the user does not request any assistance, then the respective center will promptly contact the user, the PI of the project, and the responsible S/AAA to determine the proposed disposition of the user's data. All data disposition actions will be performed as specified in the HPCMP's Data Protection Policy. If the center is unable to reach the aforementioned individuals, or if the contacted person(s) does not respond before the account is deactivated, the user's data stored on systems or home directories will be moved to archive storage, and one of the following two cases must hold:

  1. User has an account at another HPCMP center. Then, the user, the PI of the project or responsible S/AAA, as appropriate, has one year to arrange to move the data from the archive to the HPCMP Center where they have an active account. After this time period has expired, the center may delete the user's data.
  2. User does not have an account at another HPCMP center. Then, the user, the PI of the project, or responsible S/AAA, as appropriate, has one year to arrange to retrieve the data from the HPCMP resources. After this time period has expired, the center may delete the user's data.

Following the disposition of the user's data, the user account will be removed from the system.

In special cases, such as but not limited to security incidents or HPCMP resource abuse, access to a user account may be immediately prohibited and/or user data deleted, as appropriate for the circumstances as judged by the center or HPCMP.

Please note that exceptions to this general data disposition policy can and will be made as necessary, within the center's ability to fulfill such requests and given reasonable justification as judged by the center. Also, contracts requiring data maintenance beyond the conditions of this policy cannot be accommodated by the center if the center is not a signatory to the contract; such contracts may be considered when exceptions are requested.

If you have any questions concerning this policy, please contact the HPC Help Desk at 1-877-222-2039 or via email at help@helpdesk.hpc.mil.

2.8. Communications policy

The ARL DSRC Help Desk Team communicates with users via email and pertinent system messages about planned and unplanned outages, performance degradation, and network issues. The Team will also communicate user job run errors that may be causing operational issues with the system.

Effective communication is vital to the ARL DSRC and mutually beneficial to our users, and it depends on users understanding their responsibilities as good citizens of the Center. We ask that users:

  • Please keep the ARL DSRC apprised of your current email address so that vital information about our Center reaches you. Contact your S/AAA to have your email address updated. Note that if your email address is behind a firewall, you may need to arrange for your local system administrator to allow email from the ARL DSRC to pass through the firewall to your work site.
  • Please check the Centers website, https://centers.hpc.mil, which has up-to-date news and information on topics such as HPC resource availability, upcoming training opportunities, and updates to our user guides and policy documentation.

2.9. System availability policy

A system will be declared down and made unavailable to users whenever a chronic and/or catastrophic hardware and/or software malfunction or an abnormal computer environment condition exists which could:

  1. Result in corruption of user data.
  2. Result in unpredictable and/or inaccurate runtime results.
  3. Result in a violation of the integrity of the DSRC user environment.
  4. Result in damage to the High Performance Computer System(s).

The integrity of the user environment is considered compromised any time a user must modify his or her normal operation while logged into the DSRC. Examples of malfunctions are:

  1. User home ($HOME) directory not available.
  2. User Workspace ($WORKDIR) area not available.
  3. Archive system unavailable (queues are suspended, but logins remain enabled).

When a system is declared down, based on a system administrator's and/or computer operator's judgment, users will be prevented from using the affected system(s) and all existing batch jobs will be prevented from running. Batch jobs held during a "down state" will be run only after the system environment returns to a normal state.

Whenever there is a problem on one of the HPC systems that could be remedied by removing a part of the system from production (an activity called draining), it must first be determined how much of the system will be impacted by the draining in order to brief the necessary levels of management and the user community.

Where the architecture of the HPC system will allow a node to be removed from production with minimal impact to the system as a whole, then the system administrators can make the decision to remove the node with notification to the operators for information.

Where the architecture of the HPC system will allow significant portions of the system to be removed from production while user production continues on a large part of the system, the system administrator, along with government and contractor management, can make the decision to remove that part of the system. The system should show the affected portion as removed from normal job scheduling so that the user community can determine its current status. The system administrator will advise operations, the ARL Help Desk, and the HPC Help Desk of this action.

In cases where $WORKDIR will be unavailable, or a complete system needs to be drained for maintenance, contractor and government director level management will be notified. In cases involving an entire system, the HPC Help Desk will email users of the downtime schedule and the schedule for returning the system to production.

If you have any questions concerning this policy, please contact the HPC Help Desk at 1-877-222-2039 or via email at help@helpdesk.hpc.mil.

2.10. Data import and export policy

2.10.1. Network file transfers

The preferred file transfer method is over the network using the Kerberized (encrypted) file transfer programs rcp, scp, ftp, or mpscp. For large numbers of files (> 1,000) and/or large amounts of data (> 100 GB), the transfer must use the Scalable Copy Accelerated by MPI (SCAMPI) utility. For information on using SCAMPI, see the SCAMPI User Guide. Users can also contact the HPC Help Desk for assistance with the process. Depending on the nature of the transfer, transfer time may be improved by reordering the data retrieval from tapes, taking advantage of available bandwidth to/from the Center, or dividing the transfer into smaller parts; the ARL DSRC staff will assist users to the extent they are able. A physical media transfer may also be an option. Limitations such as available resources and network problems outside the Center can be expected, and users should allow sufficient time for transfers.
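
Typical network transfers look like the following (a sketch; the host name and paths are placeholders, and mpscp and SCAMPI are available only where installed):

  # Push a results file from a local workstation to your work directory on an
  # ARL DSRC system (replace the host name with the actual login node).
  scp results.tar.gz user@system.arl.hpc.mil:/p/work1/user/

  # mpscp uses the same basic syntax as scp and generally performs better for
  # large files.
  mpscp big_dataset.tar user@system.arl.hpc.mil:/p/work1/user/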

2.10.2. Reading/Writing media

Physical media data transfers can be performed by ARL DSRC staff. A data transfer request form must be submitted through the ARL DSRC Help Desk and approved. For outbound transfers on media other than optical media, a FIPS-compliant drive or unused media must be provided. Questions or inquiries should be sent to data-transfer@arl.hpc.mil.

2.11. Account sharing policy

Users are responsible for all passwords, accounts, YubiKeys, and associated PINs issued to them. Users are not to share their passwords, accounts, YubiKeys, or PINs with any other individual for any reason. Doing so is a violation of the contract that users are required to sign in order to obtain access to DoD High Performance Computing Modernization Program (HPCMP) computational resources.

Upon discovery/notification of a violation of the above policy, the following actions will be taken:

  1. The account (i.e., username) will be disabled. No further logins will be permitted.
  2. All account assets will be frozen. File and directory permissions will be set such that no other users can access the account assets.
  3. Any executing jobs will be permitted to complete; however, any jobs residing in input queues will be deleted.
  4. The Service/Agency Approval Authority (S/AAA) who authorized the account will be notified of the policy violation and the actions taken.

Upon the first occurrence of a violation of the above policy, the S/AAA has the authority to request that the account be re-enabled. Upon the occurrence of a second or subsequent violation of the above policy, the account will only be re-enabled if the user's supervisory chain of command, S/AAA, and the High Performance Computing Modernization Office (HPCMO) all agree that the account should be re-enabled.

The disposition of account assets will be determined by the S/AAA. The S/AAA can:

  1. Request that account assets be transferred to another account.
  2. Request that account assets be returned to the user.
  3. Request that account assets be deleted and the account closed.

If there are associate investigators who need access to ARL DSRC computer resources, we encourage them to apply for an account. Separate account holders may access common project data as authorized by the project leader.

3. Available resources

3.1. HPC systems

The ARL DSRC unclassified HPC systems are accessible through the Defense Research and Engineering Network (DREN) to all active customers. Our current HPC systems include:

SCOUT is an IBM Power9 system. It contains 22 nodes for machine-learning training workloads, each with two IBM Power9 processors, 512 GB of system memory, six NVIDIA V100 GPUs with 32 GB of high-bandwidth memory each, and 15 TB of local solid-state storage. SCOUT also has 128 GPGPU-accelerated nodes for inferencing workloads, each with two IBM Power9 processors, four NVIDIA T4 GPUs, 256 GB of system memory, and 4 TB of local solid-state storage. There are also two visualization nodes, each with two IBM Power9 processors, 512 GB of system memory, two NVIDIA V100 GPUs, and 4 TB of local solid-state storage. For more information about SCOUT, visit our hardware page.

For information on restricted systems, see the Restricted Systems page (PKI required).

3.2. Data storage

3.2.1. File systems

Each HPC system has several file systems available for storing user data. Your personal directories on these file systems are commonly referenced via the $HOME, $WORKDIR, $CENTER, and $ARCHIVE_HOME environment variables. Other file systems may be available as well.

File System Environment Variables

  Environment Variable   Description
  $HOME                  Your home directory on the system
  $WORKDIR               Your temporary work directory on a high-capacity, high-speed scratch file system used by running jobs
  $CENTER                Your short-term (120-day) storage directory on the Center-Wide File System (CWFS), with up to 100 TB of space
  $ARCHIVE_HOME          Your archival directory on the archive server

For details about the specific file systems on each system, see the system user guides on the documentation page.
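
In practice, these variables are used to stage data onto the appropriate file system at each stage of a job, for example (directory and file names are placeholders):

  # Stage input data from the home directory and run from the scratch file system.
  mkdir -p $WORKDIR/case_042
  cp $HOME/inputs/case_042.in $WORKDIR/case_042/

  # After a job completes, copy results you want to keep off the scrubbed
  # scratch space to short-term storage on the CWFS.
  cp $WORKDIR/case_042/output.dat $CENTER/case_042_output.dat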

3.2.2. Archive system

All of our HPC systems have access to an online archival system, which provides long-term storage for users' files on a petascale robotic tape library system. A 573-GB disk cache fronts the unclassified tape file system and temporarily holds files while they are being transferred to or from tape.

For information on using the archive server, see the Archive User Guide.

For information on using the restricted archive server, please see the Restricted Systems page (PKI required).
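
As an illustrative sketch (this assumes the archive utility documented in the Archive User Guide is available on the system; if it is not, use the Kerberized transfer commands described in section 2.10.1 instead):

  # List the contents of your archive directory.
  archive ls

  # Store a bundled result set in the archive, then retrieve it later when needed.
  archive put case_042_results.tar.gz
  archive get case_042_results.tar.gz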

3.3. Computing environment

To ensure a consistent computing environment and user experience on all HPCMP HPC systems, all systems follow a standard configuration baseline. For more information on the policies defining the baseline configuration, see the Baseline Configuration Compliance Matrix. All systems run variants of the Linux operating system, but the computing environment varies by vendor and architecture due to vendor-specific enhancements.

3.3.1. Software

Each HPC system hosts a large variety of compiler environments, math libraries, programming tools, and third-party analysis applications which are available via loadable software modules. A list of software is available on the software page, or for more up-to-date software information, use the module commands on the HPC systems. Specific details of the computing environment on each HPC system are discussed in the system user guides, available on the documentation page.
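
A typical module workflow looks like the following (the module names shown are placeholders; run the commands on the target system to see the actual list):

  # See which software modules are available and which are currently loaded.
  module avail
  module list

  # Load a compiler and MPI stack before building or running an application,
  # and unload modules that are no longer needed.
  module load gcc
  module load openmpi
  module unload openmpi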

To request additional software or to request access to restricted software, please contact the HPC Help Desk at help@helpdesk.hpc.mil.

3.3.2. Bring your own code

While all HPCMP HPC systems offer a diversity of open source, commercial and government software, there are times when we don't support the application codes and tools needed for specific projects. The following information describes a convenient way to utilize your own software on our systems.

Our HPC systems provide you with adequate file space to store your codes. Data stored in your home directory ($HOME) is backed up on a periodic basis. If you need more home directory space, you may submit a request to the HPC Help Desk at help@helpdesk.hpc.mil. For more details on home directories, see the Baseline Configuration (BC) policy FY12-01 (Minimum Home Directory Size and Backup Schedule).

If you need to share an application among multiple users, BC policy FY10-07 (Common Location to Maintain Codes) explains how to create a common location on the $PROJECTS_HOME file system in which to place applications and codes without using home directories or scrubbed scratch space. To request a new "project directory," please provide the following information to the HPC Help Desk:

  • Desired DSRC system where a project directory is being requested.
  • POC Information: Name of the sponsor of the project directory, user name, and contact information.
  • Short Description of Project: Short summary of the project describing the need for a project directory.
  • Desired Directory Name: This will be the name of the directory created under $PROJECTS_HOME.
  • Is the code/data in the project directory restricted (e.g., ITAR)?
  • Desired Directory Owner: The user name to be assigned ownership of the directory.
  • Desired Directory Group: The group name to be assigned to the directory.
    (New group names must be 8 characters or less)
  • Additional users to be added to the group.

If the POC for the project directory ceases being an account holder on the system, project directories will be handled according to the user data retention policies of the center.

Once the project directory is created, you can install software (custom or open source) in this directory. Then, depending on requirements, you can set file and/or directory permissions to allow any combination of group read, write, and execute privileges. Since this directory is fully owned by the POC, he or she can even make use of different groups within subdirectories to provide finer granularity of permissions.
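
A minimal sketch of setting up shared access under a project directory follows (the directory and group names are placeholders; the actual path and group are assigned when the directory is created):

  # Give the project group read/execute access to the shared tree, and set the
  # setgid bit on the top-level directory so new files inherit the group.
  cd $PROJECTS_HOME/my_project
  chgrp -R myprojgrp apps/
  chmod -R g+rX apps/
  chmod g+s apps/

  # A subdirectory with more sensitive content can use a different group and
  # deny access to everyone else.
  chgrp -R myrestrictedgrp apps/restricted/
  chmod -R o-rwx apps/restricted/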

Users are expected to ensure that any software or data that is placed on HPCMP systems is protected according to any external restrictions on the data. Users are also responsible for ensuring no unauthorized or malicious software is introduced to the HPCMP environment.

For installations involving restricted software, it is your responsibility to set up group permissions on the directories and to protect the data. It is crucially important to note that there are users on the HPCMP systems who are not authorized to access restricted data. You may not run servers or use software that communicates to a remote system without prior authorization.

If you need help porting or installing your code, the HPC Help Desk provides a "Code Assist" team that specializes in helping users with installation and configuration issues for user supplied codes. To get help, simply contact the HPC Help Desk and open a ticket.

Please contact the HPC Help Desk at help@helpdesk.hpc.mil to discuss any special requirements.

3.3.3. Batch schedulers

Our HPC systems use various batch schedulers to manage user jobs and system resources. Basic instructions and examples for using the scheduler on each system can be found in the system user guides. More extensive information can be found in the Scheduler Guides. These documents are available on the documentation page.

Schedulers place user jobs into different queues based on the project associated with the user account. Most users only have access to the debug, standard, transfer, HIE, and background queues, but other queues may be available to you depending on your project. For more information about the queues on a system, see the Scheduler Guides.
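
For day-to-day use, the most common scheduler interactions look like the following (PBS-style commands as a sketch; systems running a different scheduler use the equivalent commands listed in their Scheduler Guides):

  # Submit a batch script and note the returned job ID.
  qsub my_job.pbs

  # Check the status of your own jobs, or of the queues in general.
  qstat -u $USER
  qstat -Q

  # Remove a job that is no longer needed (replace 123456 with the real job ID).
  qdel 123456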

3.3.4. Advance Reservation Service (ARS)

Another way to schedule jobs is through the Advance Reservation Service. This service allows users to reserve resources for use at specific times and for specific durations. The ARS works in tandem with the batch scheduler to ensure that your job runs at the scheduled time, and that all required resources (i.e., nodes, licenses, etc.) are available when your job begins. For information on using the ARS, see the ARS User Guide.

3.4. HPC Portal

The HPC Portal provides a suite of custom web applications, allowing you to access a command line, manage files, and submit and manage jobs from a browser. It also supports pre/post-processing and data visualization by making DSRC-hosted desktop applications accessible over the web. For more information about the HPC Portal, see the HPC Portal page on the HPC Centers website.

3.5. Secure Remote Desktop (SRD)

The Secure Remote Desktop enables users to launch a gnome desktop on an HPC system via a downloadable Java interface client. This desktop is then piped to the user's local workstation (Linux, Mac, or Windows) for display. Once the desktop is launched, a user may run any software application installed on the HPC system. For information on using SRD, or to download the client, see the Secure Remote Desktop page on the DAAC website.

3.6. Network connectivity

The ARL DSRC is a primary node on the Defense Research and Engineering Network (DREN), which provides up to 40-Gb/sec service to DoD HPCMP centers nationwide across a 100-Gb/sec backbone. We connect to the DREN backbone via a 10-Gb/sec circuit.

The DSRC's local network consists of a 40-Gb/sec fault-tolerant backbone with 10-Gb/sec connections to the HPC and archive systems.

4. How to access our systems

The HPCMP uses a network authentication protocol called Kerberos to authenticate user access to our HPC systems. Before you can login, you must download and install an HPCMP Kerberos client kit on your local system. For information about downloading and using these kits, visit HPC Centers: Kerberos & Authentication, and click on the tab for your platform. There you will find instructions for downloading and installing the kit, getting a ticket, and logging in.

After installing and configuring a Kerberos client kit, you can access our HPC systems via standard Kerberized commands, such as ssh. File transfers between local and remote systems can be accomplished via the scp, mpscp, or scampi commands. For additional information on using the Kerberos tools, see the Kerberos User Guide or review the tutorial video on Logging into an HPC System. Instructions for logging into each system can be found in the system user guides on the documentation page.
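
A typical session from a workstation with a kit installed looks like the following (a sketch; the principal, realm, and host name are placeholders, and the exact kinit options depend on your kit and authentication method):

  # Obtain a Kerberos ticket (you will be prompted for your credentials).
  kinit username@HPCMP.HPC.MIL

  # Verify the ticket, then log in to an ARL DSRC system with Kerberized ssh.
  klist
  ssh username@system.arl.hpc.mil

  # Copy a file to the remote system with Kerberized scp.
  scp input.dat username@system.arl.hpc.mil:/p/work1/username/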

Another way to access the HPC systems is through the HPC Portal. For information on using the portal, visit HPC Centers: HPC Portal. You may also wish to review the HPC Portal demonstration videos. To log into the portal, click on the link for the center where your account is located.

For information on accessing restricted systems, see the system user guides on the Restricted Systems page (PKI required).

5. How to get help

For almost any issue, the first place you should turn for help is the HPC Help Desk. You can email the Help Desk at help@helpdesk.hpc.mil. You can also contact the Help Desk via phone, fax, DSN, or even traditional mail. Full contact information for the Help Desk is available on HPC Centers: Technical and Customer Support. The Help Desk can assist with a wide array of technical issues related to your account and your use of our systems, and can also connect you with various special-purpose groups to address your particular need.

5.1. Productivity Enhancement and Training (PET)

The PET initiative gives users access to computational experts in many HPC technology areas. These HPC application experts help users become more productive on HPCMP supercomputers. The PET initiative also leverages the expertise of academia and industry in new technologies and provides training on HPC-related topics. Help is available in specific computational technology areas, covering a wide range of expertise including algorithm development and implementation, code porting and development, performance analysis, application and I/O optimization, accelerator programming, preprocessing and grid generation, workflows, in-situ visualization, and data analytics.

To learn more about PET, see HPC Centers: Advanced User Support. To request PET assistance, send email to PET@hpc.mil.

5.2. User Advocacy Group (UAG)

The UAG provides a forum for users of HPCMP resources to influence policies and practices of the Program; to facilitate the exchange of information between the user community and the HPCMP; to serve as an advocate for HPCMP users; and to advise the HPC Modernization Program Office on policy and operational matters related to the HPCMP.

To learn more about the UAG, see HPC Centers: User Advocacy Group (PKI required). To contact the UAG, send email to hpc-uag@hpc.mil.

5.3. Baseline Configuration Team (BCT)

The BCT is tasked to define a common set of capabilities and functions so that users can work more productively and collaboratively when using the HPC resources at multiple computing centers. To accomplish this, the BCT passes policies which collectively create a configuration baseline for all HPC systems.

To learn more about the BCT and its policies, see HPC Centers: Baseline Configuration. To contact the BCT, send email to BCTinput@afrl.hpc.mil.

5.4. Computational Research and Engineering Acquisition Tools and Environments (CREATE)

The CREATE program enhances the productivity of the DoD acquisition engineering workforce by providing high-fidelity design and analysis tools with capabilities greater than today's tools, reducing the acquisition development and test cycle. CREATE projects provide enhanced engineering design tools for the DoD HPC community.

To learn more about CREATE, visit the HPCMP Create page or contact the CREATE Program Office at CREATE@hpc.mil. You may also want to access the CREATE Community site (Registration and PKI required).

5.5. Data Analysis and Assessment Center (DAAC)

The DAAC serves the needs of DoD HPCMP scientists to analyze an ever-increasing volume and complexity of data. Its mission is to put visualization and analysis tools and services into the hands of every user.

For more information about DAAC, visit the DAAC website. To request assistance from DAAC, send email to support@daac.hpc.mil.