News
Making AI more open could accelerate research and tech transfer
Open science and AI could be a powerful combination
During the 6 June event, speakers discussed how AI and open science could each accelerate scientific progress. For Anna L. Nsubuga, UK Ambassador to UNESCO, open science has the potential to "drive economic growth, tackle shared global challenges and promote rigour and integrity in the global research system".
At the same time, "our collective thriving research ecosystems generate significant volumes of data", she observed. AI could be used to "unlock endless possibilities", such as "improved medical diagnostics, enhanced drug discovery and the design of novel materials with unique properties".
AI has already been instrumental in achieving scientific milestones, such as predicting protein structures and facilitating fully autonomous research. It can play a transformative role at many stages of the scientific process, from the design of experiments to the analysis of large sets of data, which are often made accessible because of open science.
The intersection of open science and AI could enable tech transfer and the "emergence of new avenues of research", observed Dr Laura-Joy Boulos, Associate Professor at Saint Joseph University of Beirut and a L'Oréal-UNESCO International Rising Talent in 2020. This combination could also provide "access to information that is across disciplines, across regions and across languages", remarked Dr Dureen Samander Eweis, Science Officer at the Centre for Science Futures of the International Science Council.
For its part, UNESCO is contributing to the development of responsible AI-driven research through its Abdus Salam International Centre for Theoretical Physics (ICTP). The ICTP is a founding member of a group of developers and researchers from international research institutes and high-tech companies, spearheaded by IBM and Meta, that has been working together since December to accelerate the adoption of open AI.
On 27 May, the ICTP and IBM Research Europe and Africa launched an award recognising scientists or research teams that have made a major contribution to theory, algorithms or applications related to an open approach to AI.
When UNESCO member states unanimously adopted the UNESCO Recommendation on Open Science in November 2021, they had fresh evidence before them of the effectiveness of an open approach to science: over the preceding 20 months, they had witnessed how the development of a life-saving vaccine had been accelerated by the open sharing of the virus's genome and other research findings.
Thanks to this Recommendation, we now have both a globally agreed definition and clear principles for open science. UNESCO also published open data guidelines in 2023.
Moreover, UNESCO programmes are practising what they preach. For instance, in 2017, UNESCO's Intergovernmental Hydrological Programme launched the Water Information Network System (WINS), an open-access, participatory platform for sharing, accessing and visualizing water-related information, as well as for connecting water stakeholders. Built on a tool equipped with a geographical information system, WINS allows users to store and access data and to create tailored maps on water at all levels.
Lack of transparency in AI may threaten the credibility of AI-driven science
Despite its promise, AI presents obstacles both to open science and to the replicability, equity and trustworthiness of AI-driven scientific innovation. For example, because the development of artificial intelligence is dominated by private companies, scientists are often disincentivised from following open science practices, such as openly publishing how their algorithms work and what training data they used.
As Denisse Albornoz, Senior Policy Adviser in the Royal Society's Data and Digital Technologies team, put it, "deep learning is opaque by nature. If, on top of that, these models are proprietary, we cannot evaluate them or scrutinise them and understand, for example, how representative they are".
Misleading outcomes can arise from "incomplete, incorrect or unrepresentative datasets", added Professor Alison Noble, Technikos Professor of Biomedical Engineering at the University of Oxford and chair of the Royal Society's Science in the Age of AI Working Group, "posing potential harm in high-stakes fields like medicine".
According to Ms Albornoz, this could also lead to "the private sector shaping and defining the research agenda" and "creating dependencies on the infrastructure" it provides.
Worse, as Prof. Noble made clear, "the lack of transparency also creates challenges for reproducibility, one of the key characteristics of trusted research".
These challenges are set to have a much greater impact as science becomes increasingly reliant on AI. To tackle them, Ms Nsubuga suggested that "we must insist that AI-based research meet open science principles and practices".
Incentives can help to mainstream open AI
Prof. Noble emphasised the need for incentives to promote open science practices in AI-driven research. Public-private and cross-disciplinary collaboration could help with this, while also enhancing the quality of research and creating more accurate models.
When collaborating with private actors, Dr Boulos recommended focusing on venture capitalists, due to their flexibility. She also noted that obtaining high-quality data "is something that researchers can do better than industrials and something we should capitalise on. The research community could be a partner that supplies good data", she suggested; in return, the private sector could follow the research community's guidelines.
Ms Albornoz spoke of the need to ensure that research funders "lower the pressure to make everything AI-ready and everything AI-specific", since not all science benefits from AI.
For her part, Prof. Noble advocated developing AI models that "require less energy" to address the environmental impact of AI tools.
No need to trust AI blindly
To help different communities better understand, benefit from and trust AI-driven science without having to "trust it blindly", Ms Albornoz recommended making AI more explainable. Dr Boulos took up this idea, suggesting investment in projects "that can build explanation interfaces" for AI tools so that it is "not just experts talking to the AI but actually users".
As Ms Albornoz said, "meaningful access to essential AI infrastructure" also means "developing the right skills" and even "creating opportunities to make sure that diverse scientific communities and diverse perspectives can influence research agendas". In this way, they could become "co-designers rather than passive users", Prof. Noble added.
For her part, Dr Eweis stressed the importance of "supporting countries' scientific communities in implementing existing frameworks and guidelines" such as the UNESCO Recommendations on Open Science and on the Ethics of Artificial Intelligence.
Given the fast pace at which AI is developing, Dr Boulos suggested that UNESCO continue to provide the "space and the pace" for reflection and analysis. "We need to be able to respond fast when we talk about AI safety when anything new comes out, to be able to provide advice," she said, "so that everybody understands some of the issues but also responds together".
UNESCO's next step is to continue working with the Royal Society to assemble experts and develop a practical factsheet on this topic.