CONTENTS

Cover Story
Game-Changer
The Rise of AI
The Threat of Harm
AI for Social Good
Better Access to Better Drugs
Testing Innovation
A Challenge for Creatives

Research
Teamwork at the Nanoscale
Above and Beyond
Shedding Light on the Dark
Everything under the Sun
Skin and Bones
Weathering Extremes
More than Skin Deep
Labelled with Diamonds

Teaching and Learning
Outside-of-Class Action
Educating for the Future
A Head Start on Research

Knowledge Exchange
Art across the Disciplines
Generations Connect

People
Passion, Enthusiasm, Commitment
HKU’s New TEChno Honcho
Pedal to the Metal
GAME-CHANGER

Whether you see it as good or bad, artificial intelligence (AI) looks here to stay, with profound impacts for almost every corner of human activity. Even before the arrival of ChatGPT, HKU scholars were probing its potential – from developing and adapting their own AIs, to exploring the potential for drug discovery and access, to considering the legal and policy implications for innovation. They have also been considering AI’s effects on creativity and its potential to harm. Needless to say, their explorations have only just begun.

Illustration by Midjourney. This image was created using the prompts illustration, vector, embracing future with AI, and AI revolution.

COVER STORY HKU BULLETIN | NOV 2023
The arrival of ChatGPT may have excited media and social media commentators and the general public about the prospects of artificial intelligence. But to engineers, there was nothing new under the sun. The concept of AI has been around for decades. Back in 1950, the British scientist Alan Turing even offered a test for it: a machine could be considered intelligent if a person is unable to tell whether they are communicating with a human or a computer.

And since then, says Dr Qi Liu, Assistant Professor of Computer Science, who received an Honourable Mention in the global 2023 AI 2000 Most Influential Scholars list, the technology has evolved in waves, taking new steps every decade or two. First it was based on a rules-based approach, then probabilistic or statistical methods, and then, when both were found too limited to handle real-world scenarios, neural networks.

The concept of neural networks, which underpins machine learning and consists of nodes passing information back and forth to each other until a consensus is reached and lessons are learned from mistakes, actually dates back to the 1940s. But it was out of fashion until about a decade ago, when it was used to recognise images, such as whether a photo showed a dog or a cat. Since then, it has been widely explored and has had game-changing effects in learning from vast amounts of text data.

“ChatGPT 4 is like a simulated human brain. It can already do a little bit of reasoning and it understands human text well. Many people feel excited about this progress and want to build on top of it,” Dr Liu said. “But there is a lot of headroom for improvement. It is more like a starting point.”

Developing capabilities

His colleague, Professor Yiu Siu-ming, also in the Department of Computer Science, agrees. “Generative AI is not magic. We’ve been doing research on this for a long time. In fact, right now it’s very similar to Google. The difference is that rather than just giving you links, it gives you a summary.
But later on, there should be more useful applications.”

Both academics are working to develop those capabilities and address some of the shortcomings, such as the poor performance of generative AIs in handling mathematics and other non-text-based questions, hallucinations, possible misalignment with human values (for instance, providing instructions on making bombs) and huge energy consumption.

Dr Liu, for instance, is trying to get AI to combine other modalities with text, such as recognising an image of Barack Obama and being able to answer questions about him. He is also working on the recognition of structured datasets such as charts and tables, which fall between text and images. The latter could be useful for businesses, and he has already been approached by industry about its use in detecting money laundering.

Dr Liu and his team are also trying to reduce ‘hallucinations’, in which the AI makes up facts and citations, by connecting it to better quality information, and to make AIs more energy efficient by reducing the quantity of data or length of time required to train them.

Embracing it

Professor Yiu, for his part, recently launched the spinoff Stellaris AI with his former PhD student, Dr Jacob Jikun Wu. They have developed an alternative ChatGPT-like system with hundreds of billions of parameters that is unique in using Cantonese. The model has shown promise where other models falter, such as mathematics and logic, and, importantly, is not at risk from copyright or access issues related to overseas-owned AI.

“We are now the only ones in Hong Kong to train a model from scratch without relying on OpenAI [which owns ChatGPT]. The next step is how to make use of this,” Professor Yiu said. The team are looking into making AI work as a personal assistant, for instance by checking flight availability and booking a seat, or buying stocks based on an individual’s financial background and acceptance of risk.

“AI is here to stay and people should embrace it.
Problems like bias and false information are not new, they exist in society and in other technologies, too,” he said.

Dr Liu also believes everyone will have to adapt. “People are worried AI will replace their jobs. I think the tendency is not reversible. They need to embrace AI in their daily work to improve efficiency. That is a good thing. People can always find other things to work on that are more creative or less intensive,” he said, but he added: “We still need to be careful to avoid doing harmful things to human beings or society.”

THE RISE OF AI

“AI is here to stay and people should embrace it.” – Professor Yiu Siu-ming

ChatGPT and other generative artificial intelligence technologies burst on the scene this year seemingly all of a sudden. AI scholar Dr Qi Liu and Professor Yiu Siu-ming, developer of the new spinoff Stellaris AI, explain where these technologies came from and where they are going.

Illustration by Midjourney. This image was created using the prompts abstract, simplicity, future, pathway, AI exploration, and embrace AI.
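The error-driven learning that Dr Liu traces back to the 1940s can be shown with a toy example: a single artificial neuron learning the logical AND function by nudging its weights after each mistake. This is purely illustrative and has no connection to any HKU system.

```python
import random

# Toy illustration: a single artificial neuron ("perceptron")
# learning logical AND. After each wrong answer it nudges its
# weights -- the "learn from mistakes" loop that, scaled up to
# billions of connections, underlies modern neural networks.
random.seed(0)
w0, w1, bias = random.random(), random.random(), random.random()
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(100):                      # repeated passes over the data
    for (x0, x1), target in data:
        out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
        err = target - out                # 0 when the answer is correct
        w0 += 0.1 * err * x0              # nudge each weight a little
        w1 += 0.1 * err * x1
        bias += 0.1 * err

predictions = [1 if w0 * x0 + w1 * x1 + bias > 0 else 0
               for (x0, x1), _ in data]
print(predictions)  # [0, 0, 0, 1] -- the AND truth table, learned
```

Real systems stack many such units into deep layers and train them by gradient descent rather than this simple rule, but the same principle of correcting weights after errors applies.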
ChatGPT has a hidden prompt that runs before users enter their query, asking the AI to be kind, ethical and helpful to the user. But there is no guarantee the subsequent information or actions taken will be similarly positive. With more large language models (LLMs) similar to ChatGPT coming online, there is a prospect of AIs being manipulated to dangerous ends.

This is a worry of Assistant Professor of Philosophy, Dr Nathaniel Sharadin, who is also a Research Affiliate at the Centre for AI Safety in San Francisco. He was originally concerned about AI’s misalignment with human values, such as being hurtful or racist. He is less concerned about that now – recent AI models have become better in this regard, he said – but that does not rule out the potential to do damage in future.

“I’m more worried that these systems are much more capable than we really understand. And the capabilities for misuse are pretty stark,” he said. “Biological misuse is the most obvious example. LLMs are great at quickly finding promising drug targets, but, to avoid harm, they also need to know how to identify and formulate dangerous molecules. It doesn’t mean this is happening now, but it obviously increases the threat of biological or chemical misuse.”

To illustrate what that threat might mean, consider that the French government asked drug researchers using a model specific to drug discovery to seek out harmful targets as an exercise in what might happen. In less than six hours, the model found 40,000 potential chemical weapons, including some of the deadliest chemicals around, such as sarin.

Lowering the bar

“It’s important to say with some confidence whether an LLM can produce formulas for novel pathogens. For the record, I don’t think ChatGPT has the capabilities to do this at the moment because I don’t think it has enough of that training data. But it’s very clear that some models can and that would be very dangerous,” he said.
One of the problems is the difficulty of understanding the capabilities of AI models, a question that Dr Sharadin has been exploring. There are two methods – benchmarking against existing information, or interacting and poking around with the AI to test its limits. Both have major limitations for dangerous substances. For example, there is no benchmark for synthesising smallpox, so how can one know whether an AI can do this? And the interaction approach cannot provide systematic evidence of capabilities because every single capability would need to be tested.

“Just because you can’t test it with a benchmark or interaction is not evidence that AI can’t do it,” he said. “Another worry is that you could have LLM assistants in the lab acting like a coach, to explain why something has failed. It lowers the bar of technical know-how for chemical and biological pathogens.”

Despite the bleakness of the situation, and the rapid pace at which AI is developing, Dr Sharadin believes it is still possible to address the lurking threats. While the current polarised world is unlikely to produce treaties as successful as those on biological and chemical weapons, within nations the solutions might be easier.

A hard problem

“It’s a hard problem, but the first step should be that companies do not train increasingly large, capable models and release them publicly, because this is dual-use technology with a high capacity for misuse,” he said. “Model developers should also remove things like the repository of dangerous molecules from their training data, which they’re not doing. There is no reason why ChatGPT needs to see this in its training.”

China has acted to rein in its companies, but in the US, which is the leader in AI development, there are no controls at all. “With the pace moving so rapidly, I’m worried that even though present-day models are not by themselves extraordinarily dangerous, there is increasing scale for that,” he said.
The release of the open-source LLM Llama 2 by Meta makes matters worse. Unlike ChatGPT, whose architecture and operations are controlled by OpenAI, any user can download Llama 2 and fine-tune it to their liking, such as training it to respond in nefarious ways. All that is needed is English language skills and a computer.

Setting aside concerns about new chemicals and weapons, the effects of that capability will likely soon be felt by much of the world. “It’s certainly going to make a nuisance of itself. More than a billion people vote next year. If you think misinformation on the internet is a pain, wait till you see what can be done with a system that can produce realistic, human-tailored, micro-targeted misinformation in a heartbeat,” he said.

THE THREAT OF HARM

“I’m more worried that these systems are much more capable than we really understand. And the capabilities for misuse are pretty stark.” – Dr Nathaniel Sharadin

AI’s capacity to be harnessed for harm is an unknown – all the more reason to get better governance controls in place before it is too late, argues philosopher Dr Nathaniel Sharadin.

Illustration by DALL·E of OpenAI. This image was created using the prompts machine learning, biological and chemical synthesis, cyber vibes, spores and molecules, and photorealistic.
When Professor Victor Li On-kwok and Dr Jacqueline Lam Chi-kei of the Faculty of Engineering started working together more than a decade ago, their aim was simple: how can big data be harnessed for positive change in our societies? Professor Li brought information engineering expertise, Dr Lam urban planning.

Their initial focus was on air pollution, given its detrimental impacts on health and the environment, and they began to work with various departments in the University of Cambridge. Emerging from that work have been ongoing collaborations that now also cover AI in Medicine, as well as the HKU-Cambridge AI to Advance Well-being and Society and HKU-Cambridge AI for Neuro-disease research platforms.

In each of the past three years, for instance, they have been awarded a Healthy Longevity Catalyst Award by the US National Academy of Medicine (NAM) – an exceptional run – for applying AI and big data to the challenge of early detection and treatment of patients with Alzheimer’s Disease (AD).

“Our aim has been to use AI- and data-driven research models and interdisciplinary approaches to improve people’s health and quality of life,” Professor Li said. “These models have become more powerful in recent years thanks to advancements in computational capacity and the availability of more big data.”

Combing for biomarkers

Their focus on AD has been inspired by the fact that around 50 million people worldwide currently suffer from AD and related forms of dementia, including 10 million people in China, resulting in irreversible brain damage. Despite the urgent need, there is a lack of effective restorative treatments or preventative therapeutics for AD. One treatment, Aduhelm, was approved by the US Food and Drug Administration in 2021, but it has side effects and its effectiveness remains inconclusive.
The research led by Professor Li and Dr Lam, with collaborators Professor Illana Gozes from Tel Aviv University and Dr Yang Han and Dr Jocelyn Downey from HKU Engineering, aims to accelerate the search for more effective AD drugs by applying new causal AI techniques and domain-specific pathological knowledge.

Their approach combines AI with neuroscience and immunology expertise and has led to a methodological breakthrough: a biomedical graph that incorporates genetic mutations and pathological knowledge. This is expected to improve the speed and accuracy of drug discovery for AD.

They are also preparing to develop a model to comb huge health datasets from the US and the UK for biomarkers – genetic and protein biomarkers, as well as linguistic markers – that could determine if the disease can be detected before the onset of symptoms. Linguistic markers, in particular, could detect if a person’s language is becoming impaired, which might be a very early signal of Alzheimer’s. The team hope to identify such new linguistic biomarkers for different languages.

“Our linguistic team, including Dr Lawrence Cheung from the Chinese University of Hong Kong and Professor James Rowe from Cambridge, shall work with us to develop a standardised set of assessment tools so we can collect natural language data from patients while also making good use of the available linguistic data,” Dr Lam said. This work will be supported by their 2023 NAM award, with additional funding expected from other sources.

Recently, the HKU team also established the HKU-Cambridge AI for Neuro-disease research platform with Professor David Rubinstein and Professor James Rowe of Cambridge, to apply their AI model to predict AD and possibly other neural diseases such as Parkinson’s and Huntington’s Disease.
Air pollution alerts

Other major achievements include a HK$50 million Theme-based Research project awarded in 2017 by the Research Grants Council that enabled them to estimate air pollution at the individual street level, allowing a fine-grained assessment of health impacts.

This work was inspired by the limitations of the current air quality alert system. Hong Kong has only 18 air quality monitoring stations, but air quality can vary a lot between them. The team sought to fill in the blanks by inputting other information related to air quality – such as traffic congestion, wind direction and speed, the presence of tall buildings, and temperature and humidity – to create 110,000 virtual stations that can estimate the air pollution on any given street.

“Increasingly, our mission has been reinforced in different parts of the world where there have been increasing alerts on the effects of air pollution on health and mortality. For a while, attention was mostly on greenhouse gases, but now people realise the health impacts of air pollution have not gone away,” Professor Li said.

“AI is just a tool and it can be very meaningful and powerful if we aim to make good use of it to improve the situation for weak and vulnerable people in society, while minimising its undesirable effects,” Dr Lam said.

AI FOR SOCIAL GOOD

Artificial intelligence models developed at HKU are helping identify early-stage biomarkers for Alzheimer’s Disease, as well as air pollution at the individual street level.

“Our aim has been to use AI- and data-driven research models and interdisciplinary approaches to improve people’s health and quality of life.” – Professor Victor Li On-kwok

Illustration by Midjourney. This image was created using the prompts brain, futuristic biomarkers, and neural networks.
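To give a flavour of how readings from a sparse network of monitors can be extended to ‘virtual’ locations, here is a minimal inverse-distance-weighting sketch. It is purely illustrative: the HKU system combines traffic, wind, building and weather data in far richer AI models, and the function and figures below are invented for this example.

```python
# Inverse-distance weighting: estimate a reading at an unmonitored
# street location from a handful of fixed monitoring stations.
# Nearer stations get proportionally more weight (1 / distance^power).

def idw_estimate(point, stations, power=2):
    """stations: list of ((x, y), reading) tuples."""
    num = den = 0.0
    for (sx, sy), reading in stations:
        d2 = (point[0] - sx) ** 2 + (point[1] - sy) ** 2
        if d2 == 0:
            return reading              # exactly at a station
        w = 1.0 / d2 ** (power / 2)     # weight = 1 / distance^power
        num += w * reading
        den += w
    return num / den

# Two invented stations reading 40 and 80 (arbitrary pollutant units).
stations = [((0, 0), 40.0), ((4, 0), 80.0)]
print(idw_estimate((2, 0), stations))   # 60.0 -- midpoint averages the two
print(idw_estimate((1, 0), stations))   # pulled towards the nearer 40 reading
```

A real model would replace raw distance with learned relationships between pollution and the urban features the article lists, but the interpolation intuition is the same.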
BETTER ACCESS TO BETTER DRUGS

Dr Shirley Li Xue of the Li Ka Shing Faculty of Medicine combines real-world data, simulation modelling and decision analytics to show policymakers, healthcare professionals, the pharmaceutical industry and the public how new, seemingly expensive medicines can achieve cost-effectiveness for improving health.

“I hope our work will help Hong Kong catch up in using real-world evidence and AI to identify newly available drugs for quicker and better decision-making.” – Dr Shirley Li Xue

Warfarin is a standard drug in Hong Kong to prevent blood clots in patients with atrial fibrillation. But it is not easy to use. Patients need to monitor their kidney function and regularly return to the clinic for checks. Yet there are alternative drugs that do not require this monitoring. The major catch is that they are more expensive.

In health economics, this is not just a dollars-and-cents dilemma. The ‘incremental cost-effectiveness’ approach weighs the monetary cost against the prospect of better health, improved quality of life and an extended lifespan. Applying this approach to warfarin, Dr Shirley Li Xue, a joint appointee of the Departments of Medicine and of Pharmacology and Pharmacy, and her colleagues showed it is cost-effective for Hong Kong’s Hospital Authority (HA) to opt for the higher-priced drugs. This finding contributed to the HA’s decision to add these to the public formulary.

“We used the cost of innovative medicine minus the cost of standard care, divided by the difference in quality-adjusted life years, and compared this with the government’s willingness to pay for an additional year of perfect life. When it is below that threshold, although the drug is more expensive than conventional treatment, it can still be affordable and cost-effective,” she said.
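The calculation Dr Li describes reduces to a simple ratio compared against a willingness-to-pay threshold. A minimal sketch, using invented figures rather than numbers from the study:

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio: the extra cost per
    quality-adjusted life year (QALY) gained by the new drug."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Invented figures for illustration only.
ratio = icer(cost_new=60_000, cost_std=20_000,   # cost per patient
             qaly_new=6.0, qaly_std=5.0)         # QALYs per patient
print(ratio)               # 40000.0 extra dollars per QALY gained

# Hypothetical willingness-to-pay threshold per additional QALY.
threshold = 100_000
print(ratio <= threshold)  # True -- cost-effective despite the higher price
```

In other words, a pricier drug clears the bar whenever the extra cost per QALY gained falls below what the payer is willing to spend for an additional year of perfect health.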
That study, published early this year, dovetails with Dr Li’s ambitious new project to bring a more transparent and rational approach to determining which medicines should be made accessible to patients who use the public hospital service. Currently, that service does not tend to systematically factor in local evidence of cost-effectiveness but waits for evidence from other jurisdictions.

Providing evidence

Dr Li’s three-year Research Impact Fund project aims to demonstrate the effectiveness of health technology assessment (HTA) for enlisting new drugs. HTA has been adopted by many countries, from the UK and US to Mainland China and India, but not yet Hong Kong. “Currently, we don’t know how drug enlisting decisions are made,” she said. “Value for money or HTA are not written into the guidelines. Drug enlisting decisions should be evidence-based, transparent and fit-for-local-needs.”

Her project, called Horizon Scanning of the Medium- to Long-Term Burden of Chronic Disease and Care Needs to 2030 in Hong Kong, aims to provide that evidence. It focusses on three therapeutic areas – mental health (specifically, depression), auto-immune disease (inflammatory bowel disease) and oncology (prostate cancer) – and involves experts in health economics, big data, pharmacology and pharmacy, computer science, social science and medicine from HKU, Queen Mary Hospital, the Chinese University of Hong Kong, the London School of Economics and Political Science, Newcastle University, Fudan University and the Hong Kong Association of the Pharmaceutical Industry.

The team are first collecting big data from population-wide de-identified HA electronic medical records to understand the disease burden and patients’ unmet needs. Second, they are using classical and novel simulation models, including artificial intelligence, to forecast the burden of these diseases over the next 10 years, such as incidence, mortality and healthcare cost.
And third, they are scanning and verifying the global horizon for innovative drugs using ScanMedicine, an AI-based global medical innovation database, cross-checking the potential candidates with clinical expertise, global drug manufacturers and ongoing clinical trials.

Bridging the gap

The information will be combined into a decision toolkit that aggregates the identified innovative drugs’ safety, efficacy and cost-effectiveness in Hong Kong. “Industry always strives for breakthrough therapies, but policymakers are sometimes puzzled about which one they should buy because resources are limited,” Dr Li said. “We want to bridge their dialogue and use healthcare big data, artificial intelligence projections and decision analytics to let policymakers know other options.”

Dr Li said it was necessary to develop a localised HTA ecosystem because each place has its own healthcare system, needs and budget. HTA also improves transparency in decision-making. “We hope that with this project, we can make an important step towards establishing HTA in Hong Kong. We should at least reach a similar level to our neighbours in the Asia Pacific,” she said. “Fundamentally, our goal is to benefit patients.”

Transparency is also built into her HTA project. She and her team will make the analytical results and research report available through a website and collect feedback from users. Once they have refined their decision toolkit, they aim to hold workshops for decision-makers, healthcare providers, clinicians, industry partners and students, and to develop a lecture series, HTA101, explaining horizon scanning and how to use evidence to inform decisions on new drug listing.

“I hope our work will help Hong Kong catch up in using real-world evidence and AI to identify newly available drugs for quicker and better decision-making,” she said.

Illustration by Midjourney. This image was created using the prompts AI for medicine, and using AI to identify newly available drug.
TESTING INNOVATION

AI could determine how innovative an invention is. It may one day also have the potential to create new inventions itself. Dr Ryan Whalen has been looking at the ramifications for the law and innovation policy.

“At this point, it’s much more appropriate to use AI tools to aid human decision-making rather than be a kind of automated decision-making machine.” – Dr Ryan Whalen

Illustration by Adobe Firefly. This image was created using the prompts a thinking robot, evaluating new invention, and checklist.

How does one determine if an invention is truly innovative, worthy of a patent? Or, say, a song is truly original, worthy of copyright protection? The answer is not very straightforward, says Dr Ryan Whalen of the Faculty of Law, who is using large language models to try to improve that decision.

“These are really difficult doctrinal questions that we historically have used pretty hazy legal tests to answer,” he said. “The jurists have built up language and multi-prong tests to determine this, but frankly, a lot of scholars and practitioners think this comes down to gut feelings.

“If you’re a patent examiner, you’re reading a specification of an invention and comparing it to the sum total of human knowledge that preceded it, and asking yourself, is this obviously a new step or not? It’s a difficult question to answer with any degree of certainty and objectivity.”

This is where data and AI can come in. Dr Whalen has developed a model that can study the natural language of a patent application and compare it to other patents and scientific publications to get a more explicit signal of its originality. He layers that on top of the social network of the inventor to see whether the patent filer’s invention is non-obvious compared to those of their contacts (non-obviousness makes it a candidate for a patent). The intention is to provide jurists with more information.
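The idea of an explicit originality signal – scoring how far a document’s language sits from a corpus of prior texts – can be sketched very crudely with bag-of-words cosine similarity. This is a toy proxy, not Dr Whalen’s actual model, and the example documents are invented.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def novelty(text, prior_art):
    """1 minus the closest match to any prior document -- a crude
    proxy for how 'non-obvious' the new text's language is."""
    vec = Counter(text.lower().split())
    sims = [cosine(vec, Counter(doc.lower().split())) for doc in prior_art]
    return 1.0 - max(sims, default=0.0)

# Invented 'prior art' corpus.
prior = ["a rotary blade for cutting grass",
         "a battery powered blade for cutting hedges"]
print(round(novelty("a rotary blade for cutting grass", prior), 6))   # 0.0
print(novelty("self balancing gyroscopic unicycle", prior))           # 1.0
```

A production system would use far richer text representations and, as the article notes, combine the language signal with the inventor’s collaboration network, but the comparison-against-prior-texts idea is the same.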
“At this point, it’s much more appropriate to use AI tools to aid human decision-making rather than be a kind of automated decision-making machine. They provide another signal that might help patent judges make a really tough decision,” he said.

Networks and innovation

In China, interestingly, the question of originality is often left to private operators. Alibaba has a platform to compare new products being uploaded with existing ones, to see if there is any infringement of intellectual property (IP). This is faster and more efficient than going through the courts, but it does raise questions about allowing private corporations to be in charge of granting and arbitrating IP rights, he said.

Dr Whalen has also applied his model to innovation policy by looking at collaborative research and the backgrounds of the collaborators, including their disciplines and previous outputs. He and his team find that the most popular collaborations – between people of similar disciplines working on a similar problem, followed by people of very different disciplines working on a problem that neither has worked on before – do not produce particularly high-impact results.

“The best type of collaboration we identify is where there is a substantial difference between the collaborators, so they work in different fields but they choose to work on a topic that’s somewhat in the middle between them. These types of collaborations are highly under-represented in the empirical data, but they are quite successful.

“One of the conclusions we draw from that is when you’re developing innovation policies and looking at funding at the university or grants level, you might want to take this topographic reality into consideration to guide things like academic hiring and funding disbursements,” he said.

Teething issues

AI is not only an assessment tool, though. It may also one day create innovations itself – in which case, to whom would a patent or copyright be granted?
The issue has excited his students, but Dr Whalen believes it is a niche problem that will not exist for the foreseeable future. First, because humans are inevitably involved in terms of directing AI to solve certain problems. And second, if AI did take over product development, it would undermine the profit incentive that underpins patents and motivates developers to bring their products to market.

Even if – when – AI is able to invent things, “it can be remedied quite easily from a legislative standpoint. You just tweak the inventorship definition to include artificial intelligence but grant the patent to the operator of the AI – the person who used it,” he said.

Dr Whalen said it was important to bear in mind that AI is still going through teething issues. “Right now, it’s basically a highly sophisticated autocomplete machine,” he said. Still, it already has the potential to impact the legal profession by doing tasks that lawyers typically see as low value in terms of income, such as writing wills, or even high-value work such as complicated research that generative AI could far more easily conduct and summarise.

“For sure there will be resistance to this, but there will also be rewards for agile firms that are able to adopt these new technologies quickly,” he said.
A CHALLENGE FOR CREATIVES

With the help of AI, you can press a button and, voila, create a new song in seconds. That technology is a promise to some, but a nightmare for others. Musicologist Dr Rujing Stacy Huang has been exploring this dilemma.

Dr Rujing Stacy Huang, Presidential Postdoctoral Fellow in Music, is one of the few humanities scholars in the world working at the intersections of music and AI. In 2021, she was recruited to join ‘MUSAiC’, a five-year, EU-funded research project based in Sweden, to investigate the critical issues arising from AI’s disruption of music – the only humanities scholar in a team of computer scientists and engineers. Since then, she has engaged in countless conversations with different stakeholders on the perceived impact of AI on the creative fields.

A computer scientist recently told her they did not see why their children should learn to play any musical instrument if AI could complete the task for them. They excitedly related how machine learning would soon be generating all the music humans need, and at unbeatable quality. “What is striking in this statement is that it prioritises the function of music as a commodity and a sheer data output, while erasing the entire embodied, aesthetic experience of music,” she said.

But that doesn’t mean there is no place for AI in music. “At the extremes, we hear two opposing views around creative AI. One is that it makes everyone an artist, and this is the best era to be a creative; the other is that AI signifies the end of art, and the death of the true ‘artist’.”

Rather than choosing sides, Dr Huang is more interested in exploring the fundamental questions and assumptions embedded in such narratives. Specifically, she has been considering questions relating to ethics, authenticity, artistic skill and de-skilling, and more generally the shifting nature of creative work.

‘Machine folk’
On authenticity, for instance, her collaborator Associate Professor Bob Sturm created an algorithm that can generate tunes in the style of Irish traditional music. On the surface, such ‘machine folk’ may seem to diminish authenticity by bypassing the human creator (the ‘folk’) and their community, story, culture, history and identity – elements most essential in defining folk music. But not quite, said Dr Huang. “In this case, Sturm would share some of the generated tunes with the traditional musicians’ community, including his Irish accordion teachers. The tunes would then be practised, played, taught, exchanged, and eventually they would enter a musician’s tune collection. I call this a process of authentication, when a ‘machine folk’ comes to life. After all, authenticity is never fixed but is always in the becoming,” she said.

She has also brought a classical Chinese philosophy perspective to her work. “In my work, I have argued that in reaching beyond Western systems of thought, we may open up new perspectives for thinking critically and responsibly about (music) AI.”

Recently, Universal Music Group urged Spotify to remove a song in which AI was used to ‘deep-fake’ the vocals of Drake and The Weeknd. “This incident points to the age-old question about what in music is copyrightable. Should the human singing voice be subject to copyright protection? And, besides the question of legality, is unconsented AI voice cloning ethical?” she asked.

Human-AI partnership
Also intertwined is the issue of musical labour. “Is virtuosic skill as conventionally conceived still vital when defining the next generation of working ‘artists’? Can art be generated instantly and with minimal effort? Consider songwriting via the press of one button.”

But Dr Huang also cautioned against oversimplifying the power of AI: “When deployed with nuance, AI tools can unleash previously unimaginable creative possibilities.”

In 2021 and 2022, Dr Huang co-organised the AI Song Contest (AISC), an annual, international competition exploring human-AI partnership in songwriting. Besides the song, entrants must submit a four-page ‘process document’, and teams typically consist of both musicians and scientists. The contest has received coverage from The New York Times, Science, Scientific American, Billboard and the BBC, among others. In spring 2023, via a partnership with Sony, AISC launched its inaugural Artist Residency at the Sony Computer Science Laboratories in Paris. Dr Huang currently sits on the board of directors of the AISC Foundation, a non-profit entity based in the Netherlands. “As a process-driven contest, we’re interested in discovering how AI can meaningfully push the boundaries of creative expression,” she said.

Dr Huang, who was recently invited to share her research with Google’s Montreal office and the Google Magenta team, stressed the urgency of bridging technological development with critical, humanistic inquiry. She sees HKU as a place where this has started to happen, through initiatives such as the new Bachelor of Arts in Humanities and Digital Technologies, and the AI & Humanity Lab. “You’d be surprised at how rare this is. It is not an easy bridge to build, but it’s worth the hard work.”

“We’re interested in discovering how AI can meaningfully push the boundaries of creative expression.”
Dr Rujing Stacy Huang

Dr Huang (second from right) was a panellist at the AI and Music Industry Forum organised by IFPI Sweden (International Federation of the Phonographic Industry) and STIM (Swedish Performing Rights Society) in Sweden in May 2023.

Illustration by Adobe Firefly
This image was created using the prompts AI programme generating music, musical notes, and moving melody.
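Sturm’s generator – published as ‘folk-rnn’ – was trained on crowd-sourced transcriptions in ABC notation, a plain-text format for folk tunes, and composes new tunes one token at a time. The core idea – learn the statistics of existing tunes, then sample new ones – can be sketched far more crudely than the actual recurrent neural network; the toy corpus and first-order Markov chain below are illustrative stand-ins, not the real system or its training data:

```python
import random

# Toy corpus of tune fragments in ABC-like token form (a hypothetical
# stand-in for the thousands of transcriptions a real system learns from).
corpus = [
    "A B c d e d c B A".split(),
    "E F G A B A G F E".split(),
    "A B c B A G F G A".split(),
]

# First-order Markov chain: record which token follows which in the corpus.
transitions = {}
for tune in corpus:
    for a, b in zip(tune, tune[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    """Sample a new 'tune' token by token from the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # dead end: no observed successor for this token
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("A", 8))
```

A real system captures far longer-range structure (phrases, bar lines, repeats), but the workflow Dr Huang describes – generate, then hand the output to human musicians – is the same regardless of the model behind it.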
TEAMWORK AT THE NANOSCALE

Dr Tang Jinyao of the Department of Chemistry is showing that nanoparticles created by humans can behave as communities – much like bees and ants – to achieve more as a unit than individually.

Nanoparticles are programmed microscopic particles that can respond to external stimuli such as light to perform functions such as moving towards a chemical target or light source. The hope is that they can be harnessed for energy saving, drug delivery and other purposes. But they have a limitation: they are typically programmed as individual actors. Put a vast number of them in a solution and their behaviour becomes unpredictable.

Dr Tang has addressed this problem by treating nanorobots as a group rather than as individuals. This follows on from his 2016 achievement of developing the world’s first light-seeking synthetic nanorobot, which is about the size of a blood cell and is propelled by light through photochemical reactions. A lot of hope was pinned on that discovery and its potential to deliver drugs and treatments for healthcare. But Dr Tang and his team found the individual nanoparticles were too small to be programmed for complex actions.

“However, we started to realise they could actually be a community. Like individual ants or bees in nature, they do not do much on their own. But as a group, they start to perform very smart functions. Materials science takes the same approach. We treat matter as a group of molecules or atoms and try to figure out the properties. In this case, we started to look at how our particles or nanorobots interacted with each other to see if some kind of intelligence or functions could emerge,” he said.

Like oil in water
Further studies showed that when a large number of light-seeking nanorobots were placed in a solution, the nanorobots behaved as a random mixture.
But when different organic dye molecules – red, blue or yellow – were attached to the nanorobots and a red light was shone on the solution, the red nanorobots absorbed the light and interacted with each other to dye the solution red – “like oil molecules in water attracting each other,” he said.

To Dr Tang this offered a new insight: that collectively, nanoparticles are like a material rather than individual units. Moreover, he showed they are active particles, not the dead, inert matter that most materials are made of. “This is basically the first example of how we can use active things for material applications. It is just one example and I expect more will emerge,” he said.

Possible applications include using the nanoparticles to create electronic ink that reflects colour (similar to a Kindle, which only displays black and white), or producing camouflage or colour-shifting materials, such as a shirt that turns white under sunlight and black in the dark, or a building that changes colour to absorb sunlight when it is cold and repel it in the heat.

He noted that while this was not intelligence per se, he has also shown in other circumstances that a simple group ‘intelligence’ can emerge when programmed nanoparticles work together to make decisions in a complicated environment. That work is supported by a Croucher Senior Research Fellowship.

Following the leader
The basic interaction still involves nanoparticles attracting or repelling each other, but they are programmed to follow each other’s lead. Each particle can only sense its local environment, but when there are a lot of them, they propagate information through their interaction chains and come to a consensus on which direction to go in.

“This is basically the first example of how we can use active things for material applications. It is just one example and I expect more will emerge.”
Dr Tang Jinyao

Colour changes to a picture caused by rearrangement of pigment particles after exposure to light.
An example of how this might work beyond the experimental lab is with cancerous tumours, which give off weak signals. The nanoparticles could ostensibly be programmed to sense the signal, with those closest to the tumour sensing it first. They would move towards the tumour, attracting other particles, building up to a consensus for all the other particles to follow. However, Dr Tang cautioned that tumour targeting is still only an aspiration and more work is needed.

“Other researchers have not really realised the importance of the interaction between different particles, so we want to at least develop several theories or protocols on programming these groups,” he said. “One way of doing that is to study them as a material, because there are techniques in materials science that we could apply. And of course, we want to develop new techniques.”

Dr Tang noted that communication between nanoparticles is what links his two lines of study. He hopes in future to uncover other properties of active and intelligent matter and find useful applications, though these are unlikely to be in the biomedical field at this stage, given the regulations and complexities involved. “There’s no product yet but we want to make the first one. Hopefully, others will follow and there will be more investment in this research,” he said.

Novel ink composed of colourful microbeads adapts to the appearance of received light by light-driven separation.
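The leader-following mechanism described above – each particle sensing only its neighbours, yet the swarm converging on a shared direction – can be illustrated with a minimal simulation. The chain topology, update rule and all numbers here are illustrative assumptions, not Dr Tang’s actual model: one “informed” particle holds the true direction to a weak signal, and every other particle repeatedly averages its estimate with its immediate neighbours.

```python
import random

# Minimal consensus sketch (illustrative assumptions, not the actual
# physics). Each particle holds a noisy estimate of the direction to a
# weak signal source and can only "sense" its immediate neighbours in a
# chain; repeated local averaging propagates the single well-informed
# particle's estimate through the whole group.
N = 20
TRUE_DIRECTION = 1.0           # the signal the closest particle senses
rng = random.Random(42)
estimates = [rng.uniform(-3, 3) for _ in range(N)]
estimates[0] = TRUE_DIRECTION  # particle nearest the source is informed

def step(est):
    new = est[:]
    for i in range(1, N):      # informed particle 0 keeps its estimate
        left = est[i - 1]
        right = est[i + 1] if i + 1 < N else est[i]
        new[i] = (left + est[i] + right) / 3.0  # local neighbourhood average
    return new

for _ in range(5000):
    estimates = step(estimates)

# After enough local interactions the whole group agrees on the direction.
spread = max(abs(e - TRUE_DIRECTION) for e in estimates)
print(f"max disagreement after consensus: {spread:.2e}")
```

No particle ever sees the whole system, yet the information diffuses through the interaction chain until the group agrees – the same qualitative behaviour as the consensus Dr Tang describes.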
A team from the Department of Mechanical Engineering have developed PULSAR, a new uncrewed aerial vehicle (UAV), which is revolutionary for its single actuation, flight efficiency, and fully autonomous navigation with an extended sensor field of view (FoV) via self-rotation.

The vehicle’s full name is Powered-flying Ultra-underactuated LiDAR (light detection and ranging) Sensing Aerial Robot (PULSAR), and the development team describe it as an agile, self-rotating UAV whose three-dimensional position is fully controlled by actuating only one motor to obtain the required thrust and moments.

Mr Nan Chen, PhD candidate in Mechanical Engineering and lead author of the paper, which has been published in Science Robotics, said: “The greatest advantage of UAVs lies in their ability to break free from the constraints of terrain, swiftly reaching places that are hard for humans to access, and providing efficient real-time environmental observation. UAVs are used in various applications, such as aerial photography, express transportation, search and rescue, and building mapping. There is a lot of potential in the industry for multiple further applications.”

The team, from the Mechatronics and Robotic Systems (MaRS) Laboratory led by Dr Fu Zhang, have been working on UAVs since 2018, aiming to achieve a simple and reliable structural design and fully autonomous navigation. Autonomous UAVs typically have visual sensors to perceive obstacles and explore environments, but their perception capability is limited by a small sensor FoV. The MaRS team solved this problem by leveraging self-rotation to extend the sensor FoV without consuming extra power.

“Before PULSAR, we already had the Gemini II, a compact and efficient dual-rotor UAV with a servoless design,” said Mr Chen. “While, theoretically, only one actuator is enough to realise powered flight, in fact, when using one motor, the counter-torque of the motor cannot be fully counteracted and must result in an uncontrollable self-rotation of the UAV body.”

Self-rotation motion
The team first asked themselves how this self-rotation motion could be utilised. The question they wanted to answer was: since the FoV of LiDAR often limits the efficiency of 3D reconstruction of environments, wouldn’t combining self-rotation with LiDAR enable scanning over a larger FoV?

“Therefore, we designed PULSAR, whose revolution derives from the facts that fully autonomous flight is achieved with a minimal number of actuators, plus the self-rotation motion inherently leads to a larger FoV of LiDAR,” said Mr Chen.

“While the self-rotation motion can extend the LiDAR FoV, the firm support of an efficient and robust LiDAR localisation algorithm is indispensable, because the high-speed rotational movement is a substantial challenge for such algorithms,” he added. “The LiDAR localisation algorithm employed by PULSAR is FAST-LIO2 – an efficient LiDAR-inertial odometry also developed by our MaRS Laboratory. Notably, FAST-LIO2 demonstrates remarkable robustness to this aggressive motion. In the future, we plan to use UAVs like PULSAR as a platform to explore more advanced and more robust LiDAR localisation algorithms.”

In addition to the extended FoV, the use of a single actuator has other advantages: it reduces the energy conversion loss of the propulsion system during flight, meaning that PULSAR consumes 26.7 per cent less power than widely used quadrotor UAVs with a similar propeller area, while retaining a good level of agility. The onboard LiDAR sensor enables PULSAR to navigate autonomously in unknown environments and to detect obstacles – whether static or dynamic – in panoramic views without any external instruments. The team have been experimenting with PULSAR in environment exploration and multidirectional dynamic obstacle avoidance using the extended FoV via self-rotation, which could lead to increased perception capability, task efficiency and flight safety.

Intricate systems
Asked about his own particular fascination with UAVs, Mr Chen said: “The applications for UAVs are undoubtedly multiple and exciting, and at the same time they are also intricate systems that demand a compact structure to reduce overall weight, an efficient propulsion system to extend flight duration, stable and reliable control, mapping and planning methods to ensure flight safety, and efficient algorithms to accommodate limited onboard computing resources. In these aspects, UAVs still have many issues that are worth exploring and solving.

“Our next step will be to combine the characteristics of both self-rotation and single actuation with the advantages of fixed-wing UAVs, to support long-distance environmental observation tasks,” he added, “because while PULSAR has higher efficiency than most quadrotor UAVs, it is still hard to compare with fixed-wing UAVs in terms of flight efficiency and flight speed.”

“Uncrewed aerial vehicles are used in various applications, such as aerial photography, express transportation, search and rescue, and building mapping. There is a lot of potential in the industry for multiple further applications.”
Mr Nan Chen

Watch PULSAR in action
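The geometric payoff of self-rotation is easy to see with a back-of-the-envelope sketch. The sensor FoV and spin rate below are illustrative assumptions, not PULSAR’s actual specification:

```python
# Back-of-the-envelope sketch of how self-rotation extends sensor coverage.
# All numbers are illustrative assumptions, not PULSAR's actual specification.

SENSOR_FOV_DEG = 70.0      # horizontal FoV of a fixed, forward-facing LiDAR
SPIN_RATE_DEG_S = 720.0    # assumed self-rotation rate (two revolutions/s)

def time_to_full_coverage(fov_deg, spin_deg_s):
    """Seconds of self-rotation needed before the sensor has swept 360 deg."""
    remaining = max(0.0, 360.0 - fov_deg)
    return remaining / spin_deg_s

t = time_to_full_coverage(SENSOR_FOV_DEG, SPIN_RATE_DEG_S)
print(f"fixed sensor covers {SENSOR_FOV_DEG:.0f} deg; "
      f"spinning, it covers 360 deg every {t:.2f} s")
```

A fixed sensor never sees behind the vehicle; the spinning one sweeps a full panorama several times per second – without any extra actuator or power draw, since the rotation is a by-product of the single-motor design.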
SHEDDING LIGHT ON THE DARK

A study has provided the most direct evidence yet that dark matter does not constitute ultra-massive particles, as is commonly thought, but instead is made up of particles so light that they travel through space like waves.

The existence of dark matter (DM), the mysterious substance that makes up 80 per cent of the universe’s mass, was first reliably inferred in the 1930s, although not fully recognised until the 1970s. What particle makes up DM, however, remains an enigma. For decades, physicists have hypothesised that weakly interacting massive particles (WIMPs) are the strongest candidate, but laboratory experiments have failed to find evidence to back the theory. The new study adds to the increasing evidence against WIMPs that has been gathered from astronomical observations.

“Our findings make way for a new paradigm where ultralight DM particles are strong contenders for dark matter, deserving the full weight of scrutiny as has been paid to WIMPs,” said Mr Alfred Amruth, lead author of the study.

Dr Jeremy Lim, Associate Professor in the Department of Physics and Mr Amruth’s supervisor on the research, said: “At present, the Standard Model does not allow for any particle having the properties of dark matter. There are many theoretical extensions to the Standard Model, predicting particles over a wide range of masses – from ultralight particles (such as the ones we infer from our study) to ultra-massive particles. The question is which of these theoretical extensions is correct: if we can identify the correct theoretical extension, then we would know the correct path towards new physics.”

The findings have attracted a great deal of interest, an example being the study’s selection as the cover story of Nature Astronomy earlier this year. “That was something we were very happy about,” said Mr Amruth, who has just completed his PhD defence and is remaining at HKU on a Dissertation Year Fellowship. He first came to HKU from Sri Lanka in 2013 as an undergraduate, and for those 10 years, Dr Lim has been his research supervisor. The two of them collaborated on this work with Professor Thomas Broadhurst, Ikerbasque Research Professor at Spain’s University of the Basque Country; Professor George Smoot, a Nobel Laureate in Physics from the Hong Kong University of Science and Technology; and Dr Razieh Emami, Research Associate at the Center for Astrophysics, Harvard & Smithsonian.

“Our demonstration that ultralight particles are a strong candidate for DM will now help mobilise further research (theoretical and observational) into ultralight DM to assess whether it truly is a correct description,” said Mr Amruth. “The possible existence of such ultralight particles was first proposed by String Theory, and this particle is capable of resolving the astrophysical problems faced by massive DM. Much research has already been spurred by our work showing ultralight DM’s ability to resolve the long-standing lensing anomalies in astronomy.”

Gravitational lensing
The evidence the team use arises in particular from the phenomenon of gravitational lensing (courtesy of Einstein’s general theory of relativity), whereby anything with mass can bend the path of light. “Over the past two decades, when we looked at astronomical observations of quasars (the very bright nuclei of galaxies, constituting visible evidence for vigorous accretion onto their central supermassive black holes) which are lensed by a foreground galaxy, it was typically difficult to reproduce the observed positions and brightnesses (also known as lensing anomalies) of the lensed galaxies if one used a massive-particle DM model,” explained Mr Amruth. “However, when one uses ultralight DM, we can resolve these lensing anomalies and reproduce the observations of the lensed galaxies. Our motivation was the long-standing problem of lensing anomalies, as well as the fact that no one had attempted to calculate the lensing properties of ultralight DM before. We take these together, in the spirit of the scientific method, and make theoretical predictions which can be compared with the observations of lensed galaxies.”

Their theory has already inspired more research papers which further investigate the possible mass range of ultralight DM particles from the perspective of particle physics experiments. “There has also been research in astronomy looking at observations of lensed galaxy clusters to place further constraints on the mass of the ultralight DM particle,” said Mr Amruth. “In addition, we are currently working on a follow-up paper which finds that ultralight DM can resolve lensing anomalies in the first Type Ia lensed supernova [a type of supernova that occurs when an accreting white dwarf approaches a mass of 1.4 times that of the Sun].”

Asked about his initial interest in astrophysics, Mr Amruth said: “I’ve always been very passionate about the universe and how it works – I focussed on research about DM since it composes the majority of mass in our universe and we still don’t know what it is! This is of fundamental importance since DM, much like fire and electricity, has the potential to be the next discovery which propels humanity into a spacefaring civilisation! Ultimately, what this is all about is to identify the path to new physics, which will be an incredible revolution for modern science.”

“Much research has already been spurred by our work showing ultralight dark matter’s ability to resolve the long-standing lensing anomalies in astronomy.”
Mr Alfred Amruth

These 3D renderings illustrate differences in gravitational lensing for the case of heavy particle dark matter (left) and ultralight dark matter (right). The dashed lines indicate where two lensed images (of the same background quasar) would form, enabling the researchers to observe them via telescope.

Illustration generated by AI depicting complex caustic patterns due to gravitational lensing by ultralight dark matter. (Courtesy of Chamoth Weerasinghe)
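The lensing anomalies at the heart of the study concern the positions and brightnesses of multiple images of one background source. For a single point-mass lens – a drastic simplification of the team’s actual wave-dark-matter calculations, shown here only to illustrate how image positions and magnifications follow from a mass model – the standard lens equation β = θ − θ_E²/θ (angles in units of the Einstein radius θ_E) has exactly two solutions:

```python
import math

# Image positions for a point-mass gravitational lens - a drastic
# simplification of lensing by a real galaxy, for illustration only.
# All angles are in units of the Einstein radius theta_e.

def image_positions(beta, theta_e=1.0):
    """Solve the point-lens equation beta = theta - theta_e**2 / theta.

    Returns the two image positions: theta_plus (outside the Einstein
    radius, on the source's side) and theta_minus (inside, opposite side).
    """
    root = math.sqrt(beta**2 + 4.0 * theta_e**2)
    return (beta + root) / 2.0, (beta - root) / 2.0

def magnification(theta, theta_e=1.0):
    """Point-lens image magnification |mu| = 1 / |1 - (theta_e/theta)**4|."""
    return 1.0 / abs(1.0 - (theta_e / theta) ** 4)

# A source almost directly behind the lens produces two bright images
# straddling the Einstein ring.
tp, tm = image_positions(beta=0.1)
print(f"images at {tp:+.3f} and {tm:+.3f} Einstein radii, "
      f"magnifications {magnification(tp):.1f} and {magnification(tm):.1f}")
```

A smooth mass model like this predicts a definite pattern of image positions and brightnesses; the anomalies Mr Amruth describes are observed configurations that such smooth models cannot reproduce, whereas the interference structure of wave-like ultralight DM can.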