HPC Queue Prioritisation
The University provides centralized high-performance computing (HPC) resources to support its academic and research communities. The existing HPC facility consists of CPU clusters that provide up to 1650 x86-64 Intel Xeon cores and 14336 2nd-Gen AMD EPYC cores, as well as GPU clusters with 64 double-precision GPU cards and 8 single-precision GPU cards.
Users can submit jobs in the Linux environment through the Slurm scheduler or through the LiCo web portal. Jobs run when resources become available, on a first-come, first-served basis according to the fair-sharing mechanism set up in the scheduler. Users are grouped by their registration status at the University; currently:
- Academic staff may use up to 180 CPU cores across 5 computing nodes for 7 days, or 4 double-precision GPU cards across 2 nodes for 3 days.
- Students may use up to 4 CPU cores on a single node for 4 hours, or 2 double-precision GPU cards for 1 day.
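As an illustration, a Slurm batch script requesting resources within the staff CPU limit might look like the sketch below. The partition name, module name, and program are hypothetical examples, not the cluster's actual configuration; check the real queue names with `sinfo` on the system.

```shell
#!/bin/bash
#SBATCH --job-name=my_simulation   # job name shown in the queue
#SBATCH --nodes=5                  # up to 5 computing nodes (staff CPU limit)
#SBATCH --ntasks=180               # up to 180 CPU cores in total
#SBATCH --time=7-00:00:00          # walltime limit: 7 days
#SBATCH --partition=cpu            # hypothetical partition name

# Load the required software environment (module name is an example)
module load intel-mpi

# Launch the program across the allocated cores
srun ./my_simulation
```

Submit the script with `sbatch job.slurm` and monitor its progress with `squeue -u $USER`.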
Job queue configurations, scheduling policies, and the fair-sharing mechanism are reviewed regularly based on demand and resource availability, under the governance of the HPC Steering Committee.
Support & Contact
Special requests for additional computational resources are considered on a case-by-case basis. If you require more resources, please email csc.hpc@cityu.edu.hk with your justification.