We are delighted to announce a one-day meeting at the KEK Computing Research Centre, featuring a special guest, Dino Conciatore from the Swiss National Supercomputing Centre (CSCS). The event will serve as a valuable platform to discuss the latest trends and challenges in high-performance computing, data science, and scientific computing infrastructure, and as an excellent opportunity to strengthen the partnership between KEK and CSCS. We invite all interested researchers and technical staff to join us for an engaging day of presentations and discussions.
If you wish to stay at the KEK dormitory, please make a reservation through the KEK User Support System below.
https://www2.kek.jp/uskek/eng/visiting/
Please note that user registration is required for reservations; for more details, please check the information on that page. If you prefer a hotel, there are many options around Tsukuba Station, as well as one within walking distance of KEK: the Urban Hotel, about 2 km away.
For information on how to access the KEK Tsukuba campus, please refer to the official page below.
https://www.kek.jp/en/access/tsukuba
This page provides details on bus services from Narita and Haneda Airports, access via the Tsukuba Express, and local bus and taxi information from Central Tsukuba.
To ensure a smooth meeting, please upload your presentation materials (PowerPoint or PDF) at least one hour before your session begins.
If you have a KEK Indico account:
Please log in to the event page (https://conference-indico.kek.jp/e/kek-cscs-2025), find your contribution in the program, and click on the "Material editor" link next to the title.
If you do not have a KEK Indico account:
You can upload your slides to the KEKCloud using the link and password below.
You are also welcome to create a KEK Indico account and upload your slides there if you prefer.
The talk duration listed in the program includes time for Q&A.
These timings are a guideline. If you would like to continue your presentation beyond the allocated time, you may do so by shortening the Q&A portion of your session.
The meeting room is equipped with a large, approximately 100-inch screen for presentations. You can connect your device to the screen via HDMI or USB-C cables, which will be provided.
The power outlets are Type-A, the standard in Japan (and also common in China). While some attendees may have Type-C-to-Type-A plug adapters, we cannot guarantee their availability, so please bring your own if you need one.
eduroam is available across the entire KEK site. If you need a different network for the meeting, please let Go Iwai (go.iwai@kek.jp) know immediately so he can initiate the necessary application procedures.
The first session featured presentations from representatives of KEK (High Energy Accelerator Research Organisation) in Japan and CSCS (Swiss National Supercomputing Centre) in Switzerland, focusing on their computing infrastructure, research activities, and collaborative efforts in high-performance computing for scientific research.
Go Iwai welcomed participants to the KEK-CSCS Joint Meeting 2025, extending a special welcome to Dino Conciatore from CSCS. He began with essential logistics, reminding participants about WiFi access and the procedure for sharing slides via Indico or KEKCloud. Iwai explained that the meeting was intentionally structured like a formal international conference such as HEPiX, featuring proper badges and catering. He highlighted that the final session was dedicated to young technical engineers from the CRC, hoping it would serve as a full-dress rehearsal for their future participation in international meetings. The agenda included two facility tours: the KEKCC machine room in the morning and the Photon Factory and Belle II in the afternoon. He then introduced the first speaker, Prof. Nakamura.
Tomoaki Nakamura, head of the Computing Research Centre at KEK, provided an overview of KEK's facilities and research projects. He explained that KEK operates two main accelerators: SuperKEKB at the Tsukuba Campus and J-PARC at the Tokai Campus (approximately 60km away from the Tsukuba Campus). The Belle II experiment at SuperKEKB is a successor to the Belle experiment, which contributed to the Nobel Prize-winning work on CP violation predicted by Kobayashi and Maskawa. J-PARC produces neutrino beams for experiments like Super-Kamiokande and the upcoming Hyper-Kamiokande project. KEK is also involved in the development of the International Linear Collider (ILC), though this project faces funding challenges.
Nakamura detailed the organisational structure of KEK, which comprises six institutes, with the Computing Research Centre (CRC) being part of the Applied Research Laboratory. The CRC provides critical IT support and computing technology for KEK's various projects with a relatively small team of 13 faculty staff, 8 engineers, and 3 senior fellows. Their focus is on scientific computing rather than administrative work such as policy-making, which bridges the gap between the government and the administrative bureau.
The KEK Central Computing System (KEKCC) operates on a four-year replacement cycle due to Japanese government procurement requirements. The current system, launched in September 2024, demonstrates a 40% performance increase (in HS23) despite budget constraints and rising hardware costs. The CRC also supports distributed computing and identity federation for various experiments, including Belle II, and maintains important network connections for international data transfer.
Dino Conciatore from CSCS presented their computing centre in Lugano, Switzerland. CSCS employs 140 staff of 22 nationalities and operates the Alps supercomputer. Their facility features an innovative cooling system that draws water from Lake Lugano at a depth of 45 meters, where the temperature is a constant 6°C year-round. The lake water first cools the supercomputer, the still-cool outflow then cools other hardware, and finally the warmed water heats nearby buildings before being returned to the lake.
CSCS has developed a “Science as a Service” concept utilising a versatile software-defined architecture that enables them to partition their supercomputer into smaller, isolated segments tailored to user needs. Their private cloud infrastructure is based on Kubernetes with Rancher for management and ArgoCD for continuous delivery. They currently run approximately 500 machines on Kubernetes across 50 clusters, supporting various services, including their Swiss Large Language Models, known as “Apertus”.
The first facility tour, guided by Koichi Murakami, visited the KEK Central Computing System (KEKCC), where participants saw the Linux cluster, the storage system (including HPSS), and network hardware such as InfiniBand interconnect equipment. Participants also observed the cluster's near-real-time operational status on a Grafana dashboard.
Go Iwai presented details about the KEK Central Computing System (KEKCC), which was completely replaced last year. The system provides 12,000 CPU cores, 56 terabytes of memory, and 30 petabytes of disk storage. KEKCC operates under a multi-year rental contract system as required by the Japanese government, with complete replacement every four to five years. Despite flat budgets and rising hardware costs, they achieved a 40% performance increase (HS23) in the latest system.
Hideo Matsufuru discussed Lattice QCD (Quantum Chromodynamics) research at KEK. He explained that QCD is the fundamental theory of "strong interaction" between quarks and gluons, which is difficult to solve analytically. Lattice QCD discretises space-time to enable Monte Carlo simulations, generating valuable "gauge configurations" that can be used to measure various physical quantities. These simulations require significant computational resources due to the need to solve large, sparse matrix equations. Matsufuru highlighted the importance of high-performance computing for QCD research and the various parallelisation techniques used, including GPU acceleration. He also discussed the International Lattice Data Grid (ILDG) and the Japanese Lattice Data Grid (JLDG) initiatives for sharing valuable simulation data among national and international collaboration institutes.
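The "large, sparse matrix equations" central to lattice QCD are typically attacked with iterative Krylov solvers such as conjugate gradient. As a toy illustration of that idea (not KEK's actual solver, and using a hypothetical 2x2 dense matrix in place of the enormous sparse Dirac operator), a minimal conjugate-gradient sketch might look like:

```python
# Toy conjugate-gradient solver for A x = b, A symmetric positive-definite.
# Illustrative only: production lattice-QCD codes solve vastly larger
# sparse systems on parallel machines, often with GPU acceleration
# and sophisticated preconditioning.

def matvec(A, x):
    """Dense matrix-vector product (stand-in for a sparse Dirac operator)."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    x = [0.0] * len(b)
    r = b[:]                 # residual r = b - A x (x = 0 initially)
    p = r[:]                 # search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:     # squared residual small enough: converged
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)   # exact solution is (1/11, 7/11)
```

In exact arithmetic CG converges in at most n iterations for an n-dimensional system, which is why it (with preconditioning) scales to the very large systems these simulations require.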
Tomoe Kishimoto presented a study on applying the Foundation Model (FM) and transfer learning to event classification in collider physics, motivated by the goal of reducing the extensive computing resources required for Monte Carlo (MC) simulations in future experiments. Current Deep Learning approaches often require generating large, task-specific MC datasets for each analysis channel.
The proposed strategy utilises a Transformer-based model and employs a self-supervised pre-training phase on large amounts of unlabelled, real-world collision data from sources such as CMS Open Data. The self-supervised task is Masked Particle Modelling (MPM), where the model learns the inherent relationships among reconstructed objects by predicting which object was randomly replaced with a dummy one. Preliminary results demonstrated significant improvements in the Area Under the Curve (AUC) for charged-Higgs classification, especially when the target training dataset was small. This outcome validates the FM concept as a promising way to reduce computing resources, especially for analyses with limited statistics.
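The MPM pre-training task described above can be sketched in a few lines. This is a hypothetical illustration of the data-preparation step only (the feature layout and dummy token are invented for the example; the actual Transformer model is not shown):

```python
import random

# Sketch of the Masked Particle Modelling (MPM) self-supervised task:
# one reconstructed object per event is replaced with a dummy token,
# and the model's training target is the index of the replaced object.
# Feature vectors here are hypothetical (e.g. pt, eta, phi per object).

DUMMY = [0.0, 0.0, 0.0]  # hypothetical dummy token

def make_mpm_sample(event, rng=random):
    """Return (masked_event, target_index) for self-supervised training."""
    idx = rng.randrange(len(event))
    masked = [DUMMY if i == idx else obj for i, obj in enumerate(event)]
    return masked, idx

# One hypothetical event with three reconstructed objects:
event = [[50.2, 0.1, -1.3],
         [30.7, -0.8, 2.1],
         [22.4, 1.5, 0.4]]
masked_event, target = make_mpm_sample(event)
```

Because the label is derived from the data itself, no task-specific Monte Carlo generation is needed for this phase, which is the source of the resource savings the talk highlighted.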
Shogo Okada presented the successful application and technology transfer of Geant4, the Monte Carlo radiation simulation toolkit originally developed for High Energy Physics (HEP), to Radiological Science.
Okada highlighted two main areas of medical application. First, in particle therapy, the Geant4-based platform PTSim is widely used in Japanese hospitals for quality assurance of treatment planning, offering higher accuracy than conventional methods. Second, in microdosimetry, the Geant4-DNA extension simulates radiation phenomena down to the DNA scale, a process that is extremely computationally demanding, often requiring days to weeks of simulation time on CPU clusters.
To overcome these performance bottlenecks, the KEK team developed MPEXS, a state-of-the-art radiation simulator utilising GPGPU. MPEXS re-engineers the core Geant4 physics into CUDA, achieving massive speedups: up to x1,000 for standard EM physics and a notable x8,000 for the Microdosimetry extension (MPEXS-DNA). This acceleration makes high-accuracy Monte Carlo simulation practical for routine use in clinical and biological research.
The afternoon tour began at the Photon Factory (PF), where Kazuhiko Mase provided a broad introduction to the diverse scientific applications utilising synchrotron radiation, followed by a visit to the control room. The tour then proceeded to the Belle II experiment, where Ikuo Ueda and Hideki Miyake explained the current status of the detector upgrade at the Tsukuba Experimental Hall, where the Belle II detector is installed.
Hideki Miyake presented details about Belle II, which was originally designed for B-physics but encompasses a broader physics program, including tau physics and dark matter. The experiment aims to collect 50 times more data than previous experiments, corresponding to approximately 60 petabytes of data. Miyake described their collaboration, which involves over 1,200 members spread across Europe, Asia, and North America.
Miyake outlined their computing model, which involves storing raw data at KEKCC and distributing secondary copies to raw data centres in Europe and North America. The data undergoes various processing stages, from raw data to processed formats that facilitate physics analysis. He explained that they use two different DIRAC systems: BelleRawDIRAC for “Core Computing” and BelleDIRAC for “Distributed Computing”. For data management, they are utilising Rucio (originally developed for ATLAS) as their distributed data management system. This system handles data transfers and is being expanded to include metadata functionality, replacing their previous AMGA system.
The presentation detailed their core computing infrastructure, noting that while most components were installed on bare metal, they have recently begun using Kubernetes to host some core computing functionalities. Miyake mentioned they are evaluating DiracX, the next generation of DIRAC, which is designed as a cloud-native application that can be easily deployed in Kubernetes.
A significant portion of the discussion focused on technical challenges with their Kubernetes implementation. They are using K3s (a lightweight Kubernetes distribution) but are still finalising their design and addressing issues related to networking (discussing L2 mode vs. BGP mode) and persistent storage. Dino Conciatore provided technical advice on using Kube-VIP for cluster API access and MetalLB for application load balancing.
The presentation concluded with a discussion about storage solutions, during which Miyake mentioned plans to use MinIO as an S3-compatible object storage, but Dino suggested thorough testing first. Miyake summarised that they are deploying a production Kubernetes cluster in the coming months, with plans to eventually migrate everything in DIRAC to DiracX.
The session featured presentations from young technical engineers of the Computing Research Centre, focusing on various IT infrastructure and security projects. Go Iwai served as the session moderator, introducing each speaker and facilitating questions after their presentations. Attendees asked questions after each presentation; Dino Conciatore in particular inquired about broader topics such as integration possibilities, security concerns, and technical specifications.
The first presentation was delivered by Sari Koike, who discussed the ccPortall, an integrated single registration platform that replaced paper-based application processes for 25 different IT services. Her team developed the system during COVID-19 in 2020. The portal streamlined workflows and enhanced digital experiences for both users and system administrators. Koike also demonstrated an AI-powered concierge featuring a kawaii voice library character named “Zundamon” that could answer questions in Japanese, with plans to expand its capabilities.
Konomi Omori presented on “Architecting Trust”, detailing the development of an Identity Provider (IDP) for the GakuNin federation. This system enables cross-institutional single sign-on authentication using Shibboleth middleware, allowing users to access multiple services with one set of credentials. The presentation covered the technical implementation, including two-factor authentication and plans to achieve Identity Assurance Level 2 compliance.
Jo Ueta discussed a DNS firewall deployment strategy, explaining how the system blocks access to malicious domains without affecting legitimate websites that share the same IP address. Ueta also covered approaches for logging DNS response IP addresses to identify which domains clients are connecting to, comparing the dnstap/DNS-collector approach with a Python-module approach. He concluded that the dnstap and DNS-collector approach is better suited to their usage.
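The key point of the talk, that blocking by domain name avoids the collateral damage of blocking by IP, can be illustrated with a toy resolver filter. All names, IPs, and the blocklist below are hypothetical, and the response log stands in for the kind of data a dnstap/DNS-collector pipeline would capture:

```python
# Toy sketch of the DNS-firewall idea: two domains share one IP address,
# so an IP-level block would break both, while a DNS-layer block (by
# name) stops only the malicious one. Hypothetical data throughout.

BLOCKLIST = {"malicious.example"}

# Hypothetical resolution table: note both names resolve to the same IP.
DNS_TABLE = {
    "malicious.example": "203.0.113.10",
    "legit-site.example": "203.0.113.10",
}

def resolve(domain, log):
    """Resolve a domain, blocking listed names and logging response IPs."""
    if domain in BLOCKLIST:
        return None               # e.g. answer NXDOMAIN via a firewall rule
    ip = DNS_TABLE.get(domain)
    if ip is not None:
        log.append((domain, ip))  # response log: which domain -> which IP
    return ip

log = []
resolve("malicious.example", log)   # blocked by name, nothing logged
resolve("legit-site.example", log)  # unaffected despite the shared IP
```

The response log is also what makes the shared-IP situation visible in the first place: raw traffic logs show only the IP, while DNS-layer logging ties each connection back to the domain that was queried.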
Oka Sasaguchi presented on designing sustainable wireless network infrastructure at KEK, addressing challenges of maintaining service levels with a fixed budget while hardware costs increase. The presentation outlined the current system, which features approximately 340 wireless access points and three SSIDs, each utilising different authentication methods, and proposed solutions for the upcoming system replacement in 2027.
The final presentation by Hirofumi Maeda focused on facility maintenance and disaster resilience at KEK, covering initiatives aimed at ensuring business continuity during emergencies such as earthquakes and power outages. Maeda discussed the updating of automatic operation boards, resolving low-frequency noise pollution, and the implementation of KEK's Business Continuity Plan (BCP).
Go Iwai expressed his sincere gratitude to all participants for a productive meeting, extending special thanks to Dino Conciatore for his contribution. He also acknowledged the young technical engineers for their hard work and successful presentations in the final session.