HKU BULLETIN | MAY 2025

CONTENTS

Cover Story
All-Seeing AI
'Facial Recognition' for Cells
Smart Eyes on the Street
Computers Take the Wheel
Modelling Human Vision
Perfecting the Image
The Challenges with Computer Vision

Research
In the Blood
Online, in Touch
Lessons from the Deep
Seeing Red
Interactive Excavation
Technology in the Courtroom
Personal Record

Teaching and Learning
Arts Faculty Launches Tech Lab
AI for All
Live Action Learning

Knowledge Exchange
Reading Between the Lines
HKU Community Pharmacy Opens Shop
Safety First

People
A Master of Materiality

Books
Chinese Impacts in Southeast Asia
Unique Perspective
COVER STORY

ALL-SEEING AI

Computer vision can tell us where to steer a car or what disease is present in a cell. HKU researchers are mining its potential, as well as working to make images more realistic and modelled on human visual systems. They also sound a note of caution that reliance on computers to make decisions may lead to bias, liability and other concerns.
'FACIAL RECOGNITION' FOR CELLS

Computer vision has an important role to play in medical diagnostics. Building on a decade of work in the field, Professor Kevin Tsia recently deployed an AI-driven, ultra-fast imaging tool to speed up cancer diagnosis and, in a separate project, applied AI to decode the development of zebrafish from images and other data.

"We're focussed on what cells look like, whether their appearance is normal or not, and whether we can identify any disease." – Professor Kevin Tsia

Anyone suspected to have cancer wants to know the situation as soon as possible – is the diagnosis confirmed? Is the treatment working? Has it come back? But getting accurate information can be expensive and time-consuming. Labels, or biomarkers, have to be created before samples are screened, to tell the machine what to look for.

But now, a much faster, more cost-effective solution using generative AI has been developed by Professor Kevin Tsia of the Department of Electrical and Electronic Engineering, in collaboration with clinicians from the Li Ka Shing Faculty of Medicine (HKUMed) and Queen Mary Hospital. Rather than trying to match cells to labels, the Cyto-Morphology Adversarial Distillation (CytoMAD) takes pictures of the cells and uses AI to determine their shape, mechanical properties and molecular information. Anomalies or abnormalities are flagged for clinicians to investigate.

CytoMAD was tested on about 10 lung cancer patients using both blood samples and tissue samples for clinicians to assess each patient's tumour at the individual cell level and their risk of metastasis. The accuracy was on a par with existing detection methods, but faster and cheaper. Tens of millions of cell images from one patient could be processed in a single day.

"Our technology is very similar to facial recognition technologies that are used to identify individual faces. In our case, we're focussed on what cells look like, whether their appearance is normal or not, and whether we can identify any disease," Professor Tsia said.

"The key advantage is our ultra-high-speed imaging capability. By capturing a huge amount of cell images within a very short period of time, we generate enormous, invaluable data that trains our AI models to pinpoint diseases and abnormalities more effectively and efficiently. Our lab also engineered specialised computing hardware that further accelerates the AI computation of the data we collect."

Creating 'cell atlases'

Professor Tsia's work on imaging has been a decade in development, following foundational work such as capturing the dynamic communication of individual neurons in the brain of a mouse that was awake and imaging blood cells racing through the brain at thousands of images per second – this is the kind of speed that his super-fast imaging technology can operate at.

In addition to CytoMAD, he recently also made advances using AI technology to decode the development of zebrafish embryos from a single cell to a complete organism. The zebrafish shares about 70 per cent of its genes with humans so it is of much interest in the scientific community. Professor Tsia and his team, which included biomedical scientists from HKUMed, developed a large-scale computational framework called StaVia that can process genomic, proteomic and other 'omics' information from cells, as well as their morphology, to map in detail how cells change and interact over time, which could help explain disease progression.
They worked with one of the largest datasets in the world, from the Chan Zuckerberg Biohub, processing millions of cells and creating catalogues for each cell to uncover the zebrafish's embryonic development.

"Our tool works very well with large-scale cell atlases, maintaining all the intricate details embedded in omics data, to make it easy and intuitive for scientists to understand complex cell behaviours, from fertilisation to the complex formation of organs," Professor Tsia said.

Hurdles to overcome

The focus for StaVia at the moment is on developmental biology, given it is a more defined field than cancer, but they also hope it can be applied to diseases. An important feature of StaVia is its ability to do trajectory prediction, meaning it could be harnessed to predict the course of diseases and ageing over time.

"Our hope is that one day, CytoMAD and StaVia could together be a one-stop solution for imaging and doing computations to better understand human disease," he said.

There are some hurdles to overcome, though. Although CytoMAD can process one patient's blood and tumour samples in one day, this is still rather slow for larger scale clinical trials. If hundreds of people are involved, it would take the team hundreds of days to process using their present equipment and processing power, so they are looking for partners in the industry and elsewhere to help them scale up.

Professor Tsia added that the work would not be possible without the involvement of clinicians and biomedical scientists. He is also Programme Director of HKU's Biomedical Engineering programme and has seen enthusiasm grow for establishing more partnerships.

"I have benefitted from cross-faculty, cross-discipline collaboration and I am keen to promote it on campus. HKU researchers are very open-minded and willing to try new things. I strongly advocate to our students that they should embrace these kinds of opportunities. There is a lot of innovation to be unlocked," he said.

The Cyto-Morphology Adversarial Distillation uses AI to perform real-time 'facial recognition' of various human cell images.
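For readers who want a concrete picture of the general idea described above – deriving morphological features from label-free cell images and flagging unusual cells for a clinician to review – the minimal sketch below uses generic open-source tools. It is an illustration of the concept only, not Professor Tsia's CytoMAD pipeline; the feature set, threshold and synthetic images are assumptions made for the example.

```python
# Illustrative sketch only: flag morphologically unusual cells from label-free images.
# This is NOT the CytoMAD pipeline; it mimics the general idea of image-derived
# cell features feeding an anomaly detector whose outliers are sent to clinicians.
import numpy as np
from skimage import filters, measure
from sklearn.ensemble import IsolationForest

def cell_features(image: np.ndarray) -> np.ndarray:
    """Segment the brightest object in a single-cell image and return simple
    morphology descriptors (area, eccentricity, solidity, mean intensity)."""
    mask = image > filters.threshold_otsu(image)
    regions = measure.regionprops(measure.label(mask), intensity_image=image)
    if not regions:
        return np.zeros(4)
    r = max(regions, key=lambda reg: reg.area)   # keep the largest object
    return np.array([r.area, r.eccentricity, r.solidity, r.mean_intensity])

def flag_anomalies(images: list, contamination: float = 0.01) -> np.ndarray:
    """Return a boolean array marking cells whose morphology looks atypical."""
    feats = np.stack([cell_features(img) for img in images])
    detector = IsolationForest(contamination=contamination, random_state=0)
    return detector.fit_predict(feats) == -1     # -1 marks outliers in scikit-learn

# Example with synthetic stand-in images (real use would stream microscope frames).
rng = np.random.default_rng(0)
fake_images = [rng.random((64, 64)) for _ in range(200)]
print(flag_anomalies(fake_images).sum(), "cells flagged for review")
```

The distinguishing elements of the real system – the ultra-fast imaging hardware and the generative AI model – are not reproduced by this toy example; it only shows the shape of the screening step.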
SMART EYES ON THE STREET

Streets, sidewalks and roads, which constitute more than a quarter of the built-up land in most cities, should be designed for people, but urban planning has typically not incorporated human perspectives. With the aid of urban big imagery data and AI, Professor Qiu Waishan is helping to correct this.

"If you want to energise the urban space, then how people perceive that destination is important." – Professor Qiu Waishan

Urban planning that fails to account for human perceptions often results in environments that disadvantage vulnerable population groups. For instance, if you stand on a tree-lined urban street at lunchtime on a sunny day, the greenery will be a welcome canopy of shade. But stand at that same spot at 11pm and the shadowy trees feel altogether more sinister, especially if you are a woman. Reconciling such discrepancies is a challenge for planners, who typically have access to mainly day-time views from sources such as Google Maps.

Professor Qiu Waishan, Assistant Professor in the Faculty of Architecture, is working to address that and other urban planning issues through computer vision tools that bring human perspectives into consideration.

"People's perception of urban spaces has been overlooked in terms of how this affects individual decisions and contributes to a city's economic prosperity. For instance, it can affect whether someone wants to buy a house or rent a workspace there or go out at night. If you want to energise the urban space, then how people perceive that destination is important," he said.

Until recently, it was nearly impossible to measure and incorporate people's perspectives and behaviours. Early attempts by researchers involved manually tracking individuals on the street, a time-consuming process that, without supportive technologies, limited observations to narrow areas in single cities. But about six years ago, when Professor Qiu was a researcher at the Senseable City Lab of the Massachusetts Institute of Technology, he realised that computer vision programs could perform those observations rapidly and across multiple places at once.

Windows, human scale and variety

He set up a website to get baseline data by asking people for feedback on street pictures, such as whether a scene made them feel safe or was aesthetically appealing. That input was used to create labels of street features and establish a training dataset to predict how people might perceive a given city street and how those perceptions might affect behaviour.

The model was then used in real life. For instance, a study on housing purchases and rentals in Shanghai showed that quite apart from the usual external factors of location and accessibility to parks and schools, perceptions of better visual quality as measured by the model coincided with higher housing prices. This effect had not been tested before.

Professor Qiu's research has also identified factors that provoke positive feelings in a street environment. Transparency of the street's building façade – essentially how many large windows there are at street level – matters. So does the 'human scale' of the street, meaning the ratio of building heights to street width, which is best at a one-to-one ratio. Street complexity is also important, in terms of a variety of trees, signposts, lights, outdoor settings and spots for people to gather.

However, perceptions of visual quality can depend on what a person is doing on that street. A cyclist may prefer lots of trees, while a pedestrian will favour wide sidewalks and places to meet up and socialise. Professor Qiu's research also found that while people feel safer when there are others on the street, this is not necessarily correlated with less crime – crimes of opportunity, especially theft, can increase with more people.

Professor Qiu has also used his model to show how urban visual quality can attract innovators. He compared innovation hubs in London, Boston, Seattle and San Francisco with Shanghai's Zhangjiang Science City. The other global hubs scored 7.0–7.5 for visual quality, while Shanghai's uneven and polarised urban development meant some established parts scored 7.5 while others only about 2.0.

Along this line, visual consistency is a fourth factor affecting street visual quality, in addition to transparency, human scale and complexity. "We need to pay attention to homogenising building façades. Every city wants landmarks, but if every building is trying to be the region's rockstar, it can look very distracting and overwhelming, making people feel anxious," he added.

Night vision

Moreover, the shadows and coverage provided by trees can affect a sense of personal safety, as the example of women at night illustrates. He is also working to address that issue by applying computer vision and artificial intelligence to predict nighttime urban spaces based on daytime images in his dataset.

"This is ongoing research, so we are still collecting paired day and night street view images, but the results so far are quite amazing. Our aim is to produce a model that can help urban designers and planners around the world to predict their local nighttime environment," he said.

He is also trying to predict people's behaviour to inform better planning. Data collected through computer vision can show such things as the kinds of shops that attract more pedestrians and whether special equipment encourages people to exercise. The behavioural data is collected carefully to protect privacy – faces are blurred and the focus is on crowd behaviour, not individuals, he said.
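The crowd-label-then-predict workflow described above can be pictured in a few lines: human ratings of street images become training targets for a model that then scores streets no one has rated. The sketch below is illustrative only; the feature extractor, variable names and synthetic data are placeholders, not Professor Qiu's actual pipeline.

```python
# Illustrative sketch of the crowd-rating workflow described above, not the
# research code: fit a regressor from street-view image features to human
# perception scores, then score unseen streets automatically.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def extract_features(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a vision model: in practice features such as
    green-view index, sky ratio or window area would come from semantic
    segmentation of the street image."""
    return np.array([image.mean(), image.std(), (image > 0.5).mean()])

rng = np.random.default_rng(1)
images = [rng.random((128, 128)) for _ in range(500)]        # stand-ins for street photos
X = np.stack([extract_features(img) for img in images])

# Synthetic crowd ratings that loosely track one feature, standing in for the
# website feedback ('does this scene feel safe?') described above.
ratings = np.clip(10 * X[:, 2] + rng.normal(0, 0.5, 500), 0, 10)

X_train, X_test, y_train, y_test = train_test_split(X, ratings, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("Held-out R^2:", round(model.score(X_test, y_test), 3))

# A new street can now be scored without surveying anyone.
new_street = rng.random((128, 128))
print("Predicted perception score:", model.predict([extract_features(new_street)])[0])
```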
COMPUTERS TAKE THE WHEEL

The technology behind embodied AI and autonomous cars has made huge strides in recent times, helped in part by a system devised by HKU Professor Li Hongyang and his collaborators.

"We are the originators of the technical roadmap for end-to-end pipeline technology, which has provided the industry with a prototype verification to follow up on." – Professor Li Hongyang

Self-driving cars are no longer a vision of the future. They are already operating in some streets. Google's Waymo, which develops self-driving vehicles, is up and running in several US cities and this year started testing in Japan. Goldman Sachs predicted in August 2024 that partially autonomous cars would comprise 10 per cent of new vehicle sales by 2030.

Handing the wheel over to a machine might seem risky, but it has been made feasible thanks to advances in computer vision and the development of new AI-based systems to process visual data, such as that developed by Professor Li Hongyang, Assistant Professor in the HKU Musketeers Foundation Institute of Data Science and a member of OpenDriveLab.

In 2023, Professor Li and his team unveiled Unified Autonomous Driving (UniAD), which combines different modules, such as image recognition and action planning, into one end-to-end network. Usually, these modules function separately, but when combined, they are much more effective and accurate at processing visual signals. The research won the best paper award at the prestigious 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition and its end-to-end pipeline approach has since been widely adopted by several renowned automakers globally, generating a wide spectrum of customised versions of end-to-end driving solutions in various scenarios.

Adopted by carmakers

"We are the originators of the technical roadmap for end-to-end pipeline technology, which has provided the industry with a prototype verification to follow up on," Professor Li said. "Private companies will never say that they have adopted methods developed in academia, but Tesla now uses this kind of technology. So does Waymo and a number of car manufacturers in Mainland China, such as Huawei and Xiaomi."

In fact, he is collaborating with such a manufacturer to test the latest version of UniAD in vehicles using the platform World Engine.

In the traditional approach, there is a multi-step process where the camera takes images that are interpreted based on manually-input rules, the planning module then decides the trajectory of the vehicle, and the execution module takes the appropriate action. With UniAD, this is all integrated into one flow of action by using AI to train the model in recognition and response, and to optimise features, without any explicit design or intermediary. The team did a detailed comparison between UniAD and conventional systems and found that the relative gain or improvement in various tasks ranged from 20 per cent to 100 per cent, depending on the task.

Eyes on robotics

Despite that, though, Professor Li said there are still obstacles to the widespread use of autonomous vehicles, in particular regulatory and liability issues. His team and collaborators have therefore been looking at how to apply their know-how in another kind of system – robotics. Autonomous vehicles and robotics are similar in that they both involve movable rigid objects that can embody AI.
"As AI advances and is able to recognise objects, the next step is to physically interact with the environment. Robotics naturally fit in this category, so we are working on topics like humanoids and manipulation, including our new AgiBot World platform, which is a large-scale dataset for robotic manipulation," he said.

A recent project is training a robotic arm to recognise objects on a table and tidy them up in an organised way, similar to how humans would perform the task, rather than simply grabbing at the objects. This is not yet the humanoid robot that many envision could be a playmate or personal maid, but it is a necessary step towards developing an intelligent robotic system for AGI 2.0 – the next generation of artificial general intelligence.

"Robotics is still at a very early, preliminary stage. Robots can only move something from one place to another from a fixed position. The key challenge is in making robots mobile and able to perform tasks like a human would. You have to train them with a large amount of data that is very diverse and challenging, and it may be 5 or 10 years away before we can do this. But there are a lot of researchers and start-ups working on it," he said.

That includes Professor Li and his team, who hope that their approach of using computer vision and an end-to-end pipeline will yield new discoveries in the field.

Picture of a robot performing a household task from OpenDriveLab's AgiBot World, an open-source platform to advance humanoid robot training and development.
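One way to picture the contrast between the traditional multi-step process and an end-to-end network of the kind UniAD represents is the schematic sketch below. The module names, layer sizes and outputs are invented for illustration; this is not the UniAD architecture, only the general idea of replacing hand-wired stages with a single trainable network.

```python
# Schematic contrast between a modular driving stack and an end-to-end network.
# Module names and sizes are made up for illustration; this is not UniAD.
import torch
import torch.nn as nn

class ModularStack:
    """Hand-wired pipeline: perception feeds a rule-based planning step."""
    def __init__(self):
        self.perception = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 32))

    def drive(self, camera_frame: torch.Tensor) -> torch.Tensor:
        obstacles = self.perception(camera_frame)
        # Manually-written rule standing in for the separate planning module.
        steer = torch.tanh(obstacles.mean(dim=1, keepdim=True))
        return steer

class EndToEndDriver(nn.Module):
    """Single differentiable network from pixels to a planned trajectory, so
    perception and planning are trained jointly from data rather than hand-wired."""
    def __init__(self, horizon: int = 6):
        super().__init__()
        self.horizon = horizon
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64), nn.ReLU())
        self.planner = nn.Linear(64, horizon * 2)            # (x, y) per future step

    def forward(self, camera_frame: torch.Tensor) -> torch.Tensor:
        return self.planner(self.backbone(camera_frame)).view(-1, self.horizon, 2)

frame = torch.rand(1, 3, 64, 64)              # one camera image
print(ModularStack().drive(frame).shape)      # a single steering value
print(EndToEndDriver()(frame).shape)          # a planned trajectory of waypoints
```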
MODELLING HUMAN VISION

Professor Andrew Luo's work is deepening understanding about how the human brain processes and understands images, which has implications for the development of machine vision.

"Semantically self-supervised models are more robust to visual dissimilarity in the world because of how they classify objects. They can learn more flexible representations, potentially in a more similar way to humans." – Professor Andrew Luo

In every waking moment, our eyes take in an enormous amount of information from our environment – people, places, things, animals, events. But how does our brain identify and organise these images, and single out the important ones?

That question is of interest to both neuroscientists, who want to better understand the workings of the brain, and computer scientists, who want to apply that to computer vision. Professor Andrew Luo, Assistant Professor in the HKU Musketeers Foundation Institute of Data Science and Department of Psychology, comes from both backgrounds and he has produced insights that enrich our knowledge about visual processing.

"I study a region of the brain called the visual cortex, which processes everything we see. To study this in the past, you would have had to recruit graduate students and do a lot of experiments to show images to human subjects and record their brain activity. But now, we can do data-driven studies using tools like generative image diffusion models and large language models in a way that is accelerating scientific discovery," he said.

His own work uses functional magnetic resonance imaging (fMRI) to record the brain as it responds to thousands of random images. This is then fed into computer tools, such as image diffusion models and large language models, to identify semantic trends and understand which kinds of images activate which areas of the brain.

Survival needs

A key finding is that the organisation of the visual cortex corresponds to human evolutionary needs, with one component responding to bodies and faces, another to place areas and physical scenes, and a third component to food.

"To survive, a person needs to recognise friends and family, they need to know where they need to go, and they need to find food. This is an exciting observation because it finds that the visual cortex is strongly ecologically driven by these survival needs," he said.

The results have enabled him to stimulate brain activity in specific ways by generating images with different attributes. They have also enabled him to develop computer tools based around the brain's image processing. One tool he developed, called BrainDIVE (Brain Diffusion for Visual Exploration), creates images predicted to activate specific regions of the brain, having been trained on a dataset of natural images paired with fMRI recordings. This bypasses the need to handcraft visual stimuli. Another tool, BrainSCUBA (Semantic Captioning Using Brain Alignments), generates natural language captions for images, which can in turn be used to generate new images. Finally, a tool called BrainSAIL (Semantic Attribution and Image Localization) allows for the disentanglement of complex natural images.

The combination of image and language builds on findings from the past decade that show vision models are best able to predict the areas of the brain activated by certain images when they are 'self-supervised' and left to their own devices.
For example, such models are better at distinguishing between images of a dog and a cat if they can figure it out themselves rather than be told what to look for in a dog and a cat.

More robust

They are also better at semantic coherence, meaning they can understand when objects are related even if their individual components seem a little off. For instance, the red colour of pepperoni on pizza could lead models based on visual similarity, without the semantics component, to conclude the pizza was uncooked.

"Semantically self-supervised models are more robust to visual dissimilarity in the world because of how they classify objects," Professor Luo said. "They can learn more flexible representations, potentially in a more similar way to humans."

Vision is not the only input that could generate such outcomes. Professor Luo has also been trying to integrate sound into generative models to improve their representations. But vision is still his key focus, given it is the most dominant of the human senses. He hopes to explore the hierarchy of perception and apply his findings to the brain-computer interface.

"I came to HKU last September and it's a great place because there are a lot of people doing cross-disciplinary work in data science or machine learning combined with another field.

"Going forward, I hope to develop better tools to understand the human brain and to use the human brain to interact with, for example, brain-controlled robotics. I also want to leverage insights from human cognition and psychology to design better models," he said.
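The data-driven approach described above – pairing image representations with recorded brain responses and then asking which images drive a given region – is often framed as an 'encoding model'. The sketch below is a generic, hypothetical version of that idea on synthetic data; it is not the code behind BrainDIVE, BrainSCUBA or BrainSAIL, and the feature, voxel and region choices are arbitrary.

```python
# Generic 'encoding model' sketch: predict fMRI voxel responses from image
# features, then ask which images most strongly drive a chosen region.
# Synthetic data and parameter choices are illustrative, not the published tools.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
n_images, n_features, n_voxels = 1000, 512, 200

image_features = rng.standard_normal((n_images, n_features))   # e.g. from a vision model
true_weights = rng.standard_normal((n_features, n_voxels))
voxel_responses = image_features @ true_weights + rng.standard_normal((n_images, n_voxels))

# Fit one linear map from feature space to all voxels at once.
encoder = Ridge(alpha=10.0)
encoder.fit(image_features[:800], voxel_responses[:800])
print("Held-out prediction score:",
      round(encoder.score(image_features[800:], voxel_responses[800:]), 3))

# Simplified 'which images activate this region?' query: pick the held-out
# images whose predicted response is highest in a chosen set of voxels
# (a stand-in for, say, a face- or food-selective area).
region = slice(0, 20)
predicted = encoder.predict(image_features[800:])[:, region].mean(axis=1)
print("Indices of the five most activating images:", np.argsort(predicted)[-5:])
```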
PERFECTING THE IMAGE

HKU's Visual AI Lab is developing models to comprehend images from the real world and create new visual content in 2D, 3D and 4D in a more accurate, efficient, accessible and cheaper way than current commercial models.

"The worst scenario is when a model has very high confidence in its predictions because if it encounters something outside its learning, it may confidently misclassify it." – Professor Han Kai

The AI program Midjourney can generate all kinds of images, but it famously struggles with the human hand. Fingers may be missing or added, fingernails absent, and unnatural poses struck. The problem, says Professor Han Kai, Assistant Professor in the School of Computing and Data Science and Director of the Visual AI Lab, is in the programming. A massive amount of data is fed into AI-powered image generators to train their billions of parameters, but the processes by which they produce content remain a black box – not only to outsiders but, it would seem, to the model developers themselves.

"When existing commercial models were initially trained, these problems weren't considered, or the right parameters were not injected into them in terms of following geometric, physics and other basic laws," he said.

Professor Han and his team are working to address that. "In our Lab, we are trying to build models that can actually understand and reconstruct our open world. We want to have a principled understanding of how our models work so they are not just making a random guess."

His work covers four overlapping areas: understanding and creating images of things, even if the model has never seen them before; creating 3D content; advancing the use of generative AI in image creation; and developing the capacities of existing foundational models.

Open-world training

In the first area, Professor Han and his team have developed open-world learning techniques in which the model independently discovers and correctly applies new concepts to identify categories of objects that it has not seen before. For instance, if trained on dog and cat images, it can separate birds as their own distinct category. This is unlike the closed-world training of most other models, which only recognise objects in categories they have trained on. These models also struggle with streams of images and with varying representations, for instance, failing to understand that paintings or emojis of cats are still cat images. In open-world learning, the model still learns from pre-labelled data but then transfers that knowledge to newly encountered data. In the example of cats and dogs, it knows they have four legs, so it recognises that two-legged birds may be their own category.

The model is also being trained to admit when it cannot recognise an image outside its training distribution. This is called open set recognition, a subset of open-world learning, and it is important for averting the mislabelling of images – imagine an autonomous vehicle that receives mislabelled information about roadworks or people on the street. Even slight distortions of images can mess with the predictions of an over-confident model.

"The worst scenario is when a model has very high confidence in its predictions because if it encounters something outside its learning, it may confidently misclassify it," Professor Han said. Better to make conservative predictions than wrong ones.

Open source aims

His second area of investigation is in reconstructing objects and scenes in 3D or 4D from text and single images. This is tricky work and closely related to his third area of study, generative AI in image creation. While some private firms already provide text-to-3D content generation, such as Common Sense Machines, universities and other users have to pay to use them and users often need special expertise. Professor Han is therefore trying a different approach. His aim is to produce high-quality images on a limited budget and make this open source for anyone to use with minimal training. For instance, users could extract an object from one image and place it in a different environment simply by typing in some text. The model could find application in things like animation, gaming, education and even creating one's own avatar to upload to TikTok, although that is still some way in the future.

Professor Han is also working with existing models – his fourth area of focus – to see if they can perform tasks other than those they were designed for, such as matching images to text. He and his team have been testing the capabilities of various models and found Google Gemini 1.5 Pro, GPT 4.0 and Claude's Sonnet all performed 'reasonably well', but still had improvements they could make. Another ongoing study has set up different AI models to play games against each other, such as chess and Pong, in order to see if their decision-making processes can be discerned.

Professor Han said his Lab's findings could ultimately have applications in autonomous driving, drug discovery, medical image analysis, surveillance, as well as gaming and entertainment – anything that relies on images. "We are trying to figure out what the key problems are and develop solutions. There are still a lot of problems to address in this space," he said.

The text-to-image generation model developed by Professor Han and his team is capable of learning a visual concept from a set of images, encoding it as a visual token 'S*', and then regenerating it in diverse, previously unseen scenarios.
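Open set recognition, as described above, can be illustrated with a very small sketch: a classifier that withholds a label when its confidence falls below a threshold, rather than forcing every input into a known category. The toy classifier and the threshold value below are arbitrary choices for illustration, not the Visual AI Lab's method.

```python
# Minimal illustration of open set recognition: refuse to label inputs the
# model is not confident about, instead of confidently misclassifying them.
# The threshold and the two-class toy setup are arbitrary illustrative choices.
import numpy as np

KNOWN_CLASSES = ["cat", "dog"]

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def open_set_predict(logits: np.ndarray, threshold: float = 0.85) -> str:
    """Return a known class only when the model is confident; otherwise admit
    the input may lie outside the training distribution."""
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "unknown (flag for review)"
    return KNOWN_CLASSES[best]

print(open_set_predict(np.array([4.0, 0.5])))   # clearly cat-like  -> 'cat'
print(open_set_predict(np.array([1.1, 0.9])))   # ambiguous input   -> flagged as unknown
```

The conservative branch is the whole point: a wrong but confident label is the failure mode Professor Han warns about, while an explicit "unknown" can be routed to a human or a fallback system.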
THE CHALLENGES WITH COMPUTER VISION

Allowing computers to identify, use and store visual images such as people's faces, travel routes and medical scans, and to make decisions based on that, raises legal and ethical concerns. Professor Boris Babic explains.

"If we apply computer vision systems to public policy and criminal procedure, then there is going to be a concern about making sure the systems are performing roughly equally among all relevant subgroups." – Professor Boris Babic

The power of computers to 'see' – to recognise visual patterns – is being tapped for all sorts of purposes, from enabling autonomous vehicles, to providing fast and comprehensive analysis of medical image results, to recognising faces. But there are inherent traps that can undermine trust in the technology. Bias and discrimination, liability issues, and corporate surveillance are all areas where computer vision has raised red flags, says Professor Boris Babic, joint Associate Professor at the HKU Musketeers Foundation Institute of Data Science, Department of Philosophy, and Faculty of Law. And while some of these issues have been widely acknowledged, others are not receiving sufficient attention.

Much of the focus in popular and academic literature has been on the bias problem. For instance, computer vision reportedly has difficulty distinguishing between faces of black people and between men and women. The problem is likely down to the training data used, but there are real-world consequences.

"One obvious issue is that if we apply computer vision systems to public policy and criminal procedure – basically any sensitive decision-making context that requires allocating scarce resources or imposing costs like criminal punishment – then there is going to be a concern about making sure the systems are performing roughly equally among all relevant subgroups," he said.

However, this may be a bigger problem in societies with a history of racial segregation, such as the US, since discrimination and bias may already be embedded in the data used to train computer vision systems there.

Corporate surveillance concerns

A wider issue is liability, particularly in the case of autonomous vehicles. While manufacturers could be held accountable, what happens when an accident is triggered through shared data? Think in terms of something like the digital navigator Amap in China, which is used by the automaker BYD, or an equivalent in other countries.

"If every company is developing its own AI system and its own data, and everything is closed to others, maybe it's easy to determine liability. But how do you re-conceive responsibility when you have multiple subsystems contributing to a decision in a way that's going to make it increasingly hard to decipher which part of the decision belongs to which subsystem?" Professor Babic said. Some car companies have started experimenting with insurance coverage for autonomous cars, but the question probably will not be settled until there are compelling test cases in court.

In any case, the collection of data by companies raises the problem of corporate surveillance. Carmakers and many other companies, such as phone manufacturers and online browsers, collect user data ostensibly for functionality. But what those companies can then do with the data is not really addressed by privacy protection laws, he said. For example, should insurance companies be prevented from using a driver's recorded driving speed to set their insurance premiums? If yes, what assurance can there be that the data is not shared with adjusters in the company?

"I think corporate surveillance and espionage get way too little attention. Most of the attention is on government or police surveillance, which is low-hanging fruit because their cameras are often conspicuous and you can visit government contract websites to see what they are ordering.

"Whereas with corporate data, we have no idea how their models are updated or what happens with a lot of this data, or how it's combined with other data sources. If you have a self-driving car with radar and imaging systems, they can track your every movement, but it's not clear where the data is going," he said.

Explainability might not improve things

Other worrying examples of corporate use are medical scans and other health data, which, if accessed by insurers, could affect a person's premiums. New laws or legal approaches might potentially address liability and surveillance. But when it comes to bias and fairness, the response has been driven by engineers: open the black box of AI and try to fix the problem there. Intriguingly, Professor Babic is not in favour of this approach.

"Computer vision is a paradigm example of a black box system because it is high-dimensional and the features are not intuitive. There has been a large area of research on how to make these systems explainable or transparent, but I think we should accept they are doing something quite different from how our own brains process and recognise images. Rather than attempt to make them understandable to us, we should instead focus on their performance. Because explainability does not necessarily make performance better," he said.

Accepting that the technology operates differently from humans could also help sharpen focus and resources on the important issues of liability, whether decisions are autonomous or informed by human judgement, and how the data is used. "We should be looking at what it is doing and whether it is improving decision-making in society for whatever context we're appropriating it for," he added.
RESEARCH

IN THE BLOOD

A remarkable new treatment for leukaemia, invented and developed at HKU, is chemo-free and has a 97 per cent success rate, offering new hope and less treatment trauma for patients.

"It has been a remarkable achievement with the oral formulation of arsenic trioxide that has led to improvement in patients' survivals and quality of life." – Professor Harry Gill

Scientists at HKUMed have invented a groundbreaking regimen – an oral formulation of arsenic trioxide (oral-ATO, or ARSENOL®) – for the treatment of acute promyelocytic leukaemia (APL), a blood cancer which has a high fatality rate. They have been developing the treatment over the past 25 years, and it is the first prescription medication invented and manufactured in Hong Kong to obtain European, US and Japanese patents.

There are three parts to the treatment – oral-ATO, all-trans retinoic acid (ATRA) and ascorbic acid (AAA) – the first is a solution and the second two are in pill form. Significantly, it is a chemotherapy-free regimen, meaning far less discomfort for patients, and it can be administered on an outpatient basis with no need for hospitalisation.

Lead investigator Professor Harry Gill, Clinical Associate Professor in the School of Clinical Medicine of HKUMed, said: "The 'AAA' regimen is a game-changer – highly effective and safe for all risk categories of APL patients. Oral-ATO is used first to treat patients with APL and then becomes the maintenance treatment for patients in first complete remission, resulting in a 97 per cent five-year leukaemia-free survival (LFS) rate and a 97 per cent overall survival (OS) rate.

"A five-year survival rate refers to the proportion of patients surviving at five years following the diagnosis of APL. That essentially means these patients are cured. Before oral-ATO was available, the treatment was with ATRA and chemotherapy alone and survivals were between 60–70 per cent at five years taking into account early deaths that occurred during the initial phase of treatment."

Less trauma

The all-important chemo-free aspect makes the treatment less traumatic for patients. "Since 2018 we have been using a completely chemotherapy-free regimen," said Professor Gill. Conventionally, APL is treated with ATRA and chemotherapy or with the intravenous formulation of ATO that carries higher risks of treatment-related side effects and impairments to quality of life.

"Not only is this treatment less harsh but patients are not exposed to risk of infections such as toxicity or damage to the heart, liver or kidney. Nausea, vomiting, hair loss – all of these impairments to quality of life can be avoided or minimised. Side effects are usually very mild – grade one and grade two in medical terms, such as mild headache, possibly mild gastric discomfort."

Professor Gill has worked on the treatment since 2007 with the inventors of its initial formulation, Professor Cyrus Kumana and Professor Kwong Yok-lam, and initially conducted pre-clinical and phase I studies. Next came the clinical development with phase II studies, and the establishment of Good Manufacturing Practice-grade oral-ATO and obtaining the European Medicines Agency (EMA) and US Food and Drug Administration (FDA) designations through clinical research in APL.

"During our first study between 2002–2017, we treated patients who had relapsed APL. At that time, such patients would require a bone marrow transplant, which involved chemotherapy, discomfort and potential complications. This treatment obviated the need for a transplant in relapsed APL. The findings of this study were published in 2018."

Remission maintenance

Subsequently, the HKUMed research team started to use oral-ATO as the maintenance treatment for APL patients in first complete remission. The findings from this were published in 2020. The next step was the incorporation of oral-ATO into frontline treatment for newly diagnosed APL patients, achieving 100 per cent LFS and OS at five years. The findings were published in 2019. Since 2018, the chemotherapy-free oral-AAA regimen has been fully implemented and initial findings were presented at the American Society of Hematology annual meeting in December 2023.

Oral-ATO (ARSENOL®) is now available in Hong Kong, Macau, Singapore and Malaysia as part of clinical trial or routine clinical use for APL. It has obtained orphan drug designation from both the US FDA and the EMA, as well as the Investigational New Drug designation from the US FDA. As a result of these designations, oral-ATO will be used in Europe and North America as part of phase III international studies in 2025. Additionally, an investigator-led phase III study will be conducted in the UK in collaboration with the University of Cardiff.

The team's aim is to complete the phase III clinical studies in APL and also continue researching its efficacy for other diseases. "We are testing on other malignancies," said Professor Gill, "and have seen efficacies for mantle cell lymphoma, NPM1-mutated AML, and TP53-mutated AML – but so far, we have not published it as we are applying for patents. We have done pre-clinical work for autoimmune disorders such as systemic lupus erythematosus and rheumatoid arthritis showing positive findings."

Summing up, Professor Gill said: "It has been a remarkable achievement with oral-ATO that has led to improvement in patients' survivals and quality of life. Translating research into improvements in patient care is our ultimate aim."

Professor Harry Gill (right) and Professor Kwong Yok-lam (centre) introducing oral-ATO to the press.
ONLINE, IN TOUCH

A comprehensive international study undertaken by a team from HKUMed, and including longitudinal studies on elderly people, found that internet use among adults over 50 years old is consistently associated with better mental health outcomes.

"The internet can be a valuable tool which offers older people opportunities for social connection, access to services, and engagement with resources that might otherwise be difficult to obtain." – Professor Zhang Qingpeng

"These positive associations were observed in more than 87,000 people across 23 countries, representing a wide variety of cultural and economic contexts," said Professor Zhang Qingpeng, from the Department of Pharmacology and Pharmacy and HKU Musketeers Foundation Institute of Data Science. "We found that elderly people who engage online report higher life satisfaction and better self-reported health, as well as fewer depressive symptoms.

"One of the key implications is that the internet can be a valuable tool which offers older people opportunities for social connection, access to services, and engagement with resources that might otherwise be difficult to obtain. Our study also suggests that promoting internet access and digital literacy among older populations could be a cost-effective strategy for improving mental health, particularly in regions where traditional mental health services are less accessible."

Previous research on the topic has tended to focus on specific regions with mixed findings. This study is one of the first to provide robust evidence on the relationship between internet use and mental health among older adults across a wide range of countries. The data came from six large, publicly available ageing cohorts, from high- or middle-income countries, including the US, England, Europe and Israel, China, Mexico and Brazil. Each cohort follows a similar longitudinal design involving nationally representative samples of middle-aged and older adults and contains detailed information on physical and mental health, socioeconomic factors, and lifestyle behaviours.

By combining data from these multiple international studies, the researchers were able to analyse the relationship between internet use and mental health in a large sample of individuals across diverse countries and contexts. "These datasets are available to researchers through public access or specific research collaborations, ensuring transparency and reproducibility in our study," said Professor Zhang.

Regular usage

The researchers categorised internet use based on frequency into several levels: 'never', 'less than weekly', 'weekly' and 'daily'. "The regularity of internet use seems to enhance the potential benefits, as it provides continuous access to social connections, information, and services that can support mental wellbeing," he said. "Specifically, individuals who used the internet more frequently – especially those who used it daily – reported better mental health outcomes."

Based on the findings, the team believe that tailored interventions leveraging internet connectivity hold the potential to improve mental health among middle-aged and older populations. Such interventions would need to be personalised, taking into account sociodemographic characteristics, health behaviours, physical health and genetic risk.

"For example, using mobile health apps to encourage fitness and monitor wellness may be more beneficial for mental health in older adults without sufficient physical activity, as regular exercise has been shown to reduce depressive symptoms and improve life satisfaction," said Professor Zhang. "These apps could provide reminders, track progress, and offer tailored exercise plans to motivate users. Further, telemedicine platforms and online consultations enable seniors to access healthcare remotely, making it easier for them to manage their physical and mental health."

E-learning opportunities

"For older adults who are already physically active, other types of interventions, such as e-learning programmes or online hobbies, can be used to engage them mentally. These could include online courses that encourage lifelong learning, virtual arts and crafts workshops, or forums where they can explore new interests and passions. Such activities stimulate cognitive engagement, promote creativity, and provide opportunities for social interaction – all of which are important for maintaining mental health in later life."

Online social platforms also allow them to maintain relationships with family, friends and support networks, reducing feelings of loneliness and isolation. While the team have not yet begun developing related intervention programmes, they view this as a promising avenue for future research. "It is one of the potential directions we may pursue, with the aim of translating our findings into practical solutions that can help improve mental well-being in middle-aged and older adults," said Professor Zhang.

Depression projection

Professor Zhang's findings come at the same time as an economic modelling study on depression – the first of its kind in Hong Kong – which has revealed the city will face a significant and ongoing burden of depression from 2023 to 2032. The study, led by Professor Shirley Li Xue from the Department of Medicine and Department of Pharmacology and Pharmacy, estimates Hong Kong's average yearly healthcare cost linked to depression to be HK$2.51 billion, affecting more than 55,000 people. Finding ways to engage people positively is therefore becoming increasingly important for both the improvement of individual mental well-being and the reduction of this economic burden.
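As a rough illustration of the kind of pooled analysis described under 'Regular usage' above – relating an ordered internet-use category to a mental-health outcome across harmonised cohorts – the sketch below fits a simple regression on synthetic data. The cohort labels, variable names and effect sizes are placeholders, not the study's datasets or model specification.

```python
# Purely illustrative sketch of a pooled analysis relating internet-use
# frequency to a depressive-symptom score across cohorts. Cohort labels,
# variables and effect sizes are invented placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
levels = ["never", "less_than_weekly", "weekly", "daily"]

df = pd.DataFrame({
    "cohort": rng.choice(["cohort_A", "cohort_B", "cohort_C"], size=3000),  # hypothetical cohorts
    "internet_use": rng.choice(levels, size=3000),
    "age": rng.integers(50, 90, size=3000),
})

# Synthetic outcome: more frequent use is associated with slightly fewer symptoms.
use_effect = df["internet_use"].map({lvl: -0.4 * i for i, lvl in enumerate(levels)})
df["depressive_symptoms"] = 5 + 0.03 * (df["age"] - 50) + use_effect + rng.normal(0, 1, 3000)

# Treat frequency as an ordered categorical factor, adjusting for age and cohort.
df["internet_use"] = pd.Categorical(df["internet_use"], categories=levels, ordered=True)
model = smf.ols("depressive_symptoms ~ C(internet_use) + age + C(cohort)", data=df).fit()
print(model.params.filter(like="internet_use"))
```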
LESSONS FROM THE DEEP

Fossil data has shown that over time deep-sea ecosystems have been affected by temperature changes and food input, raising questions about possible negative effects of intervention measures to mitigate against climate change.

"Southern Ocean deep-sea fauna responded both to climatic change and to iron fertilisation in different ways, and this could have important implications for ocean-based climatic interventions…" – Professor Moriaki Yasuhara

Paleobiologists from HKU's School of Biological Sciences recently completed a study on the sub-Antarctic zone in the Southern Ocean which revealed that temperature changes and food input have played distinct roles in shaping deep-sea ecosystems. The study was published in the journal Current Biology.

The research was a collaboration between HKU and Princeton University in the US. Professor Moriaki Yasuhara and Ms Raine Chong from HKU's School of Biological Sciences, the Swire Institute of Marine Science, and the Institute for Climate and Carbon Neutrality, joined forces with Dr May Huang from Princeton's Department of Geosciences.

"We showed that the Southern Ocean deep-sea ecosystem we see today was established following the Mid-Brunhes Event – an important climatic transition which occurred around 430,000 years ago – and that the abundance of individual species waxed and waned, depending on sediment input from land, as well as dust concentrations," said Professor Yasuhara. "However, prior to the Brunhes Event, the species present and how they responded to environmental variability were different. The event thus marks a substantial turnover from colder-water to warmer-water species."

This result suggests that if future human-induced climatic warming weakens global deep-water circulation from the Atlantic through the Southern Ocean to the Pacific, a deep-sea biodiversity hotspot in the Southern Ocean may diminish or even vanish.

Extreme environment

Asked about the methodology used for the study, Professor Yasuhara explained that the understanding of long-term history such as climatic impact in a remote and extreme environment is not an easy job. However, one way of getting around this problem is to study fossil organisms, a key tool for understanding the complex biological history of Antarctica and obtaining glimpses of entire ancient ecosystems.

Professor Yasuhara and colleagues leveraged the fossil record of ostracods – microscopic crustaceans whose shells are preserved in abundance, making them ideal for studying deep-sea ecosystems through deep time – that is, dating back tens of millions of years and beyond. They obtained the fossils from sediment cores taken from the Tasman Sea and documented the ecosystem response of these fossilised organisms to the most recent major shift in these astronomical perturbations and their accompanying climatic changes.

"Components of fossil fauna showed substantially similar change – one to deep-water temperature and the other to dust flux changes," said Professor Yasuhara. "This indicates Southern Ocean deep-sea fauna responded both to climatic change and to iron fertilisation in different ways, and this could have important implications for ocean-based climatic interventions as it is possible that they could have negative impacts on the deep-sea ecosystem itself."

The interventions the team are concerned about are geoengineering technologies, referred to as ocean-based climate intervention, which include proposals such as the removal of marine carbon dioxide with the aim of reducing future warming by putting and storing carbon or carbon dioxide in deep-sea sediment under the ocean bed, where in theory it will remain stable because of the low-temperature and high-pressure environment.

Professor Yasuhara said: "The intention basically is to enhance the surface primary production, for example through iron fertilisation, in order to increase carbon flux to the deep sea – or to put more organic matters such as crops and macroalgae into the deep sea.

"But our research indicates we should be cautious about taking such actions, especially if we are not very sure about their effectiveness and how long such mitigation will actually work sustainably and/or continuously. For example, if carbon will go back to the atmosphere in just 10 years or even 100 years or so, it won't be worth doing."

Researchers acknowledge that the deep-time development of the Southern Ocean's deep-sea ecosystem remains poorly understood, despite being a key region in global ecological, climatological and oceanographic systems, where deep water forms and biodiversity is unexpectedly high.

Carbon dioxide modulators

The Southern Ocean accounts for as much as 75 per cent and 40 per cent of global ocean heat and carbon uptake, respectively, and its currents are key modulators of global carbon dioxide flux. A breakdown of this current system will have significant negative effects on marine ecosystems from the surface to the deep seafloor.

But, while acknowledging that concerns are growing for how climate change and the rise in ocean temperatures will affect even these, the deepest areas of the oceans, Professor Yasuhara pointed out that it's crucial to have foundational knowledge of the deep-sea ecosystem and biodiversity, particularly if the intention is to alter it. "Even if the intention is good the outcome might not be," he said. "Before taking any action of this kind, we need to have a fuller picture of what the consequences may be."

The team now plan to conduct high-time-resolution research. "We want to ascertain if the results will be similar at, for example, the centennial/millennial time scale. That is the time scale we are worrying about for the future," said Professor Yasuhara.

Images of ostracods – microscopic crustaceans – from the deep sea. (Courtesy of Jingwen Zhang)