Hélène Draux Articles - TL;DR - Digital Science
https://www.digital-science.com/tldr/people/helene-draux/
Advancing the Research Ecosystem

Fragmentation of AI research: Looking for silos in Applied AI
https://www.digital-science.com/tldr/article/fragmentation-of-ai-research-looking-for-silos-in-applied-ai/
Thu, 15 Feb 2024 11:44:58 +0000

The field of Artificial Intelligence is made up of six Division-level Fields of Research. We explore whether these FoRs have evolved together or separately, and discuss the consequences of both.

The post Fragmentation of AI research: Looking for silos in Applied AI appeared first on Digital Science.

Figure 1: DALL•E rendering of “AI siloed research”. Note: This image has been edited because some of the words attached to each silo by DALL•E were gibberish: a good illustration of the limitations of such AI apps.

Considering that Artificial Intelligence research happens across six different Fields of Research (FoRs) – the basic fields of AI and ML, but also the computational branches of Maths, Chemistry, Biology and Psychology – I wondered in the first post of this series whether this would lead to siloed research.

Progress will happen in all of these six fields, so there is a risk that research happens in parallel; findings may not translate broadly if cross-pollination among fields is weak. The present analysis explores the interconnectedness of publications across AI-related Fields of Research over the past decade. Specifically, I constructed a global citation network mapping out how much scholarly papers in one field cite those in other fields. My goal was to identify tendencies towards knowledge fragmentation or collaboration within the AI scholarly landscape.

Overall network

I built a citation network of the FoRs for the 10-year period 2014-2023. Each of the 172 nodes represents a FoR; each link is weighted by the number of times publications in one FoR cite publications in another. As in our previous analysis, I considered AI to encompass the following FoRs:

  • 4602 Artificial Intelligence
  • 4611 Machine Learning
  • 3102 Bioinformatics and Computational Biology
  • 4903 Numerical and Computational Mathematics
  • 3407 Theoretical and Computational Chemistry
  • 5204 Cognitive and Computational Psychology

Figure 2 below shows the graph representing the citation network of the main FoRs. I used Gephi to create the network, with the Yifan Hu Proportional layout. I kept only links that accounted for more than 10% of the citing FoR's citations; a few FoRs were excluded from the network because less than 10% of their citations were to other FoRs (4806 Private Law and Civil Obligations, 4801 Commercial Law, and 3602 Creative and Professional Writing).
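The 10% pruning step can be sketched in plain Python. All citation counts and FoR labels below are invented for illustration – the real counts come from Dimensions:

```python
from collections import defaultdict

# Hypothetical citation counts between Fields of Research (FoRs);
# (citing FoR, cited FoR) -> number of citations. Numbers are illustrative.
citations = {
    ("4602 AI", "4611 ML"): 900,
    ("4602 AI", "4903 Comp. Maths"): 1100,
    ("4602 AI", "3407 Comp. Chemistry"): 50,
    ("4611 ML", "4602 AI"): 800,
    ("3102 Comp. Biology", "4611 ML"): 400,
    ("3102 Comp. Biology", "3407 Comp. Chemistry"): 100,
}

# Total outgoing citations per citing FoR.
out_totals = defaultdict(int)
for (src, _), n in citations.items():
    out_totals[src] += n

# Keep only links carrying more than 10% of the citing FoR's citations.
kept_edges = {
    (src, dst): n
    for (src, dst), n in citations.items()
    if n / out_totals[src] > 0.10
}
```

With these toy numbers, the AI → Computational Chemistry link (50 of 2,050 citations, about 2.4%) is pruned, while all links above the 10% threshold survive into the network that Gephi then lays out.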

The six FoRs relevant to AI are colored while the remaining FoRs are in light grey. The main part of the network in the center is bean-shaped and has two short tentacles: Music and Performing Arts at the top, and Theology and Religious Studies at the bottom. These fields of research only cite each other.

Otherwise, all remaining FoRs are tightly linked to each other. The largest grey node, Clinical Sciences, sits at the top center of the bean-shaped network – this is where health-related fields are found, while non-health-related fields are in the bottom half.

Figure 2: Citation network for AI Fields of Research (FoRs).

AI-related fields

AI, ML and Computational Mathematics

Figure 3: Citation network for AI Fields of Research (FoRs) – highlighting AI, ML, and Computational Mathematics.

Based on the citation network, the two basic AI-related fields, AI (dark purple) and ML (pink), are closely related to Computational Mathematics (light purple) – AI is even closer to Computational Mathematics than to ML. Knowledge exchange between the three fields is therefore strong, and advances in the basic fields will be readily applied in Computational Mathematics.

Others

Figure 4: Citation network for AI Fields of Research (FoRs) – highlighting Computational Psychology, Biology, and Chemistry.

Publications in the Computational Psychology (green) and in the Computational Biology (orange) fields cite publications in the ML (pink) field, among others. 

Publications in the Computational Chemistry (blue) field, however, are far removed from ML and AI: less than 10% of their cited publications are in either field. They do cite FoRs within Applied Computing, so knowledge transfer is likely to happen, but at a slower pace.

Conclusion

The present analysis of citation flows between AI-related fields over the past decade reveals significant integration among core disciplines like AI, ML and Computational Mathematics, but more isolated pockets of research within applied domains such as Chemistry and Psychology. While Computational Mathematics is likely to rapidly absorb theoretical advancements, fields still early in adopting AI face more barriers to translating recent progress across disciplinary divides.

Sustained efforts to break down silos through funding, conferences and communication channels can prevent bifurcation of knowledge and instead catalyze creativity across the AI research landscape.

Women’s first publications in decline after decades of growth
https://www.digital-science.com/tldr/article/womens-first-publications-in-decline-after-decades-of-growth/
Thu, 08 Feb 2024 12:27:46 +0000

For decades, the proportion of women publishing their first academic publication has increased. However, following the COVID-19 pandemic, this growth has gone into reverse for two consecutive years, with 2024 not looking better.

The post Women’s first publications in decline after decades of growth appeared first on Digital Science.

A generational shift in reverse
DALL•E3 rendering of “an illustration featuring a woman scientist in the lab assisting her child with school work. The scene is warmly lit, creating an inviting and supportive environment that captures a nurturing moment of teaching and learning.”

For decades, societal barriers have gradually lowered, enabling more women to publish academic research and pursue a research career. In 2021, for instance, women outnumbered men among recipients of doctoral degrees at US universities (Perry 2021). In some countries it is even common for a woman to have a child while doing her PhD (the question being how many, rather than whether), suggesting that motherhood and an academic career may be compatible given the right environment.

Despite this constant progress, we know that women were disproportionately affected by the COVID-19 pandemic (Kwon, Yun, and Kang 2023) – a fallout that researchers had anticipated a year earlier in a call for funders and institutions to address it (Davis et al. 2022). So has the pandemic halted or reversed the progress women have made entering academic research?

Global trends from 2000

Kwon et al.'s (2023) analysis suggested that mid-career women and those living in less gender-equal countries would be the most affected by the COVID-19 pandemic – but what about early-career researchers? I set out to investigate how the proportion of women among first publications had changed over time.

I started the analysis in 2000, using Dimensions.ai data in Google BigQuery (GBQ) together with Gender API data, which classifies gender from first names based on the probability that a given name is used by a man or a woman. Metadata of research articles have changed since the 2000s: at the very start of the period, journals often recorded only authors' initials, and women were more likely to use initials in order to hide their gender. Where possible, Dimensions' algorithms will have consolidated a profile containing both the initials-only and full-first-name publications, making it trackable in our analysis; but for women at the start of the period, especially those with short careers, gender may be underreported. On the other hand, women often changed their name at marriage or divorce, making it impossible to track their profile if they had published before, and therefore creating multiple "first publications".
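As a rough sketch of how such name-based classification works – the lookup table, probabilities, and threshold below are all invented stand-ins for real Gender API responses:

```python
# Toy lookup standing in for the Gender API: first name -> (gender, probability).
# Real responses include accuracy and sample-size fields; values here are invented.
NAME_GENDER = {
    "helene": ("female", 0.98),
    "mark": ("male", 0.99),
    "andrea": ("female", 0.55),   # ambiguous across languages
}

def classify_author(first_name: str, threshold: float = 0.9) -> str:
    """Return 'female', 'male', or 'unknown' for a first name."""
    name = first_name.strip().lower()
    # Initials (e.g. "J.") carry no gender signal - a known bias, since
    # women historically used initials more often to hide their gender.
    if len(name) <= 2 and name.endswith("."):
        return "unknown"
    gender, prob = NAME_GENDER.get(name, (None, 0.0))
    if gender is None or prob < threshold:
        return "unknown"
    return gender
```

Names below the confidence threshold, unseen names, and initials all fall into "unknown", which is why the early years of the period undercount women.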

During the year 2000, 4.5 million women published their first academic publication, representing 30.2% of observable researchers. By 2021, the percentage of debut works authored by women had increased by a third, reaching 40.3% (17.6 million women). This increase can be attributed to a greater percentage of women starting a research career and, to a much lesser extent, to our inability to identify women accurately at the start of the period.

Figure 1. Global percentage of women who published their first publication between 2000 and 2023.

But 2022 and 2023 saw a downward trend, to 39.4% and 37.4% respectively, returning to 2013 values. This emerging downward curve signifies a generational shift in reverse: from empowering girls and young women in academia to placing obstacles, intentional or not, in their way.

We know that, in and outside of academia, women around the world were affected disproportionately more than men by COVID-19 – but what about the first publications of those starting out in academia?

Selected countries

I selected the 20 countries with the highest number of women publishing their first publication over the period 2000-2023. The interactive graph below shows the trend over time, compared with the global data presented earlier; double-click on a country to see only that one, then progressively add others to compare.

Figure 2. Percentage of women who published their first publication between 2000 and 2023 in the top 20 countries by women’s first publications.

I then did a deep dive into three continents – Europe, Asia, and America – looking at how countries placed themselves around this global average.

European countries

Figure 3. Percentage of women who published their first publication between 2000 and 2023 in the top 20 countries in Europe by women’s first publications.

Among the selected countries, the following were in Europe: Switzerland, Germany, Spain, France, the United Kingdom, Italy, the Netherlands, Poland, Portugal, Russia, and Sweden. Some European countries (Spain, France, Italy, the Netherlands, Poland, Portugal, and Sweden) stayed above or around the average, while the UK, Germany, Switzerland, and Russia* were below average for most of the period. All countries but Portugal showed a downward trend in 2022 and 2023.

* Slavic researchers use their initials more than others, making the determination of their gender more difficult.

Asian countries

Figure 4. Percentage of women who published their first publication between 2000 and 2023 in the top 20 countries in Asia by women’s first publications.

The Asian countries in the top 20 (China, India, and Japan) use non-Latin scripts, and gender is in general harder to derive from Asian names, so our analysis cannot be as comprehensive as for European countries. Nevertheless, for the names that could be analysed, all Asian countries stayed below the global average over the period; India's percentage of women's first publications dipped in 2007. All showed a downward trend in 2022 and 2023.

American countries

Figure 5. Percentage of women who published their first publication between 2000 and 2023 in the top 20 countries in America by women’s first publications.

On the American continent (Brazil, Canada, Mexico, and the US), on the other hand, all countries but the US stayed above the global average for most of the period. Brazil was even above the highest global value (40.3%) for almost the entire period. Mexico fluctuated above the global average until 2013, then settled around or below it until 2022.

In search of the first grant

Dimensions also holds data on grants, although its coverage is not as complete as the publication dataset – it contains most established national and international funders but not all university and corporate grants. Nevertheless, I thought it might show a similar trend – fewer first publications corresponding to fewer first grants. In some fields, especially scientific ones, it is now common to write a PhD based solely on publications, while in others researchers still write a thesis, so the first publication comes during the first postdoc.

However, the trend in the grant dataset was more mixed than in the publication dataset – the grant dataset is smaller and therefore more erratic – and although there is still an observable global decline in women receiving their first grant in 2022 and 2023, it is not seen in all countries. In the UK and the US, for instance, the proportion of women receiving their first grant has stayed flat and gone up, respectively. But it has gone down in the Netherlands (where women's representation in first publications was better than average), China, and India.

Conclusion

The present analysis has shown that the long-term rise in the proportion of women publishing their first academic work peaked unexpectedly in 2021, followed by two consecutive years of decline. The decline coincides with the COVID-19 pandemic and its immediate aftermath, which affected society in general, but women more markedly.

I see two possible reasons for the decline in first publications: a decrease in less established grants – those not tracked by Dimensions (university grants, or funding under a larger project) – and/or the two-body problem. In the two-body problem, two partnered researchers attempt to find positions at the same university; it is often solved by one partner taking a lesser position with more teaching than research, or one involving lots of travel, which became harder during the pandemic.

However, if the trend persists, there is likely an ongoing crisis, and the change of trend since 2021 should be taken seriously. Data for 2024 showed a persistent decline but was not included in the analysis as the year has barely started (although in publishing terms, 2024 started in October 2023, when publications dated 2024 began appearing in Dimensions; we already count 200,000 women's first publications for 2024).

The present analysis of the reversal should serve as a call-to-action for funders and institutions to support and retain women in research through and beyond times of crisis.


Bibliography

Davis, Pamela B., Emma A. Meagher, Claire Pomeroy, William L. Lowe, Arthur H. Rubenstein, Joy Y. Wu, Anne B. Curtis, and Rebecca D. Jackson. 2022. ‘Pandemic-Related Barriers to the Success of Women in Research: A Framework for Action’. Nature Medicine 28 (3): 436–38.
https://doi.org/10.1038/s41591-022-01692-8

Kwon, Eunrang, Jinhyuk Yun, and Jeong-han Kang. 2023. ‘The Effect of the COVID-19 Pandemic on Gendered Research Productivity and Its Correlates’. Journal of Informetrics 17 (1): 101380.
https://doi.org/10.1016/j.joi.2023.101380

Perry, Mark. 2021. ‘Women Earned the Majority of Doctoral Degrees in 2020 for the 12th Straight Year and Outnumber Men in Grad School 148 to 100’. American Enterprise Institute – AEI (blog). 14 October 2021.
https://www.aei.org/carpe-diem/women-earned-the-majority-of-doctoral-degrees-in-2020-for-the-12th-straight-year-and-outnumber-men-in-grad-school-148-to-100/

Research on Artificial Intelligence – the global divides
https://www.digital-science.com/tldr/article/research-on-artificial-intelligence-the-global-divides/
Thu, 04 Jan 2024 14:04:54 +0000

There is a large global divide in AI research and development, with the vast majority of research publications and funding coming from the US, China, and the EU27.

The post Research on Artificial Intelligence – the global divides appeared first on Digital Science.

Artificial intelligence (AI) has tremendous potential to transform economies and societies around the world. However, AI research and development remain highly uneven across different countries, regions and organisation types. In this analysis, we explore a few indicators that highlight the growing global divide in AI capabilities.

To define the field of AI for our analysis, we relied on the Fields of Research classification system, which comes out of the box in Dimensions. Specifically, drawing on current consultancy work, we considered AI research to span the following Fields of Research:

Basic AI Fields:

  • 4602 Artificial Intelligence
  • 4611 Machine Learning

Applied AI Fields:

  • 3102 Bioinformatics and Computational Biology
  • 4903 Numerical and Computational Mathematics
  • 3407 Theoretical and Computational Chemistry
  • 5204 Cognitive and Computational Psychology

Distribution of AI Research Publications

Geographic distribution

Figure 1: Cartogram made online using https://go-cart.io/cartogram based on data from Dimensions.ai for 2012-2023.

The striking cartogram in Figure 1 visualises the global landscape of artificial intelligence (AI) research from 2012-2023, based on academic publication outputs from countries around the world. Each country's area is scaled to the number of peer-reviewed papers it contributed over the decade across core AI subfields, from machine learning to computational mathematics.

The US dominates the cartogram, its swollen territorial outline reflecting its pole position: American institutions produced over 772,000 papers, a huge 30% share globally. China is the other major leader, at approximately 465,000 papers, an 18% share.

Beyond the two giants, moderately sized outlines mark the UK, Germany, Japan, and others like India, Brazil, and Iran – established AI research strongholds producing 10,000 to 140,000 papers each over the period. Meanwhile Africa, South America, and most Asian nations appear shrunken, indicating largely untapped AI potential, with less than a 5% collective contribution.

While North America, East Asia, and select countries in Europe and Oceania show depth, the visual makes the divide with the Global South stark – over 80% of the world's population resides there, with little access to the resources critical to unlocking the AI capabilities seen elsewhere. Addressing these deep disparities should be a top priority for transitioning towards more equitable and inclusive AI development globally.

To understand the extent of the global divide in the last 12 years, we analysed the trends in the top 10 countries that published AI research, based on Dimensions’ data for 2012-2023. 

Figure 2: Trend in AI research publications in the countries with the top 10 biggest outputs

The EU27 published over 30,000 AI papers annually at the start of the period, leading global output. However, China has since surpassed the EU27, rising rapidly from around 13,000 papers in 2012 to become the leading publisher with nearly 60,000 in 2023. The US, which started in second position, is now third, its publication volume having levelled off in recent years. India has experienced major growth, with over 17,000 AI papers in 2023 compared with under 4,000 a decade prior; it even overtook the UK, which started in fourth position. Canada, Australia, Japan, Russia, and South Korea maintain a top-10 presence, though their year-to-year output has declined mildly in the most recent years.

The data shows emerging economies like China and India gaining ground in AI research, reflected in their rapidly growing publication output over the past decade. Though traditional leaders in North America and Europe continue to produce high volumes, developing countries are claiming more seats at the table and diffusing expertise globally. The advantage is still skewed towards wealthy regions, but the statistics indicate that the dividing lines are blurring as the playing field levels to some extent, allowing more nations to drive progress based on their own interests. While gaps remain in scale and infrastructure, the global AI research landscape shows increased participation beyond an exclusive Western bloc.

Balance university vs company research

We have used the Global Research Identifier Database (GRID – a database of educational and research organisations worldwide) typology to understand the distribution of AI publications between public and private research institutions from 2012-2023. GRID distinguishes organisations into categories like Archive, Company, Education, Facility, Government, Healthcare, Nonprofit, and Other.

The distribution of research organisation types has remained very stable from 2012 to 2023 across all Fields of Research, as Figure 3 shows. Education institutions (universities) produce the most research, followed by Healthcare (hospitals), Facilities (specialised research institutions), Government, Companies, Nonprofits, and Archives (libraries, museums).

Figure 3: Distribution of research organisation type between 2012 and 2023, across all research publications in Dimensions.
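Computing these yearly shares is straightforward; here is a minimal sketch with invented records, each pairing a publication year with the GRID type of its organisation:

```python
from collections import Counter

# Toy records: (year, GRID organisation type) for each publication.
records = [
    (2012, "Education"), (2012, "Education"), (2012, "Healthcare"),
    (2012, "Company"),
    (2023, "Education"), (2023, "Facility"), (2023, "Company"),
    (2023, "Company"),
]

def type_shares(records, year):
    """Share of each organisation type among publications in a given year."""
    counts = Counter(t for y, t in records if y == year)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}
```

Running `type_shares` for each year of the period, once on all publications and once restricted to AI publications, yields the stacked distributions shown in Figures 3 and 4.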

However, the distribution differs slightly for AI research specifically, as seen in Figure 4. While Education still leads in AI, Facilities overtake Healthcare for the second spot. Companies have also published more AI research than Governments since 2021, having surpassed Nonprofits back in 2018.

Figure 4: Distribution of research organisation type between 2012 and 2023, in AI research publications in Dimensions.

The increase in AI research authored by companies raises some concerns. Primarily, there is an inherent conflict of interest: companies have financial incentives to promote their own AI products and services. Their research may present biased, overly positive conclusions that omit negative findings, skewing the literature compared with more objective academic work.

In addition, corporate research often lacks transparency about underlying data, methods, and disclosure of limitations. Companies also frequently patent algorithms, data, and innovations based on their published research, restricting access in ways that limit follow-on research progress and collaboration.

Furthermore, the priorities guiding corporate AI research cater more closely to commercial opportunities rather than pure scientific or social value. This means that important basic research with less immediate real-world application is at risk of becoming underfunded. Lastly, the underlying profit motives may override ethical considerations around things like algorithmic bias, privacy, and security – issues that companies have less incentive to study or address.

Global North vs Global South

Figures 5a and 5b show the same data for the Global North (5a) and Global South (5b). Both distributions stayed relatively stable throughout the period, and the two largest organisation types in both were Education and Facility. However, in the Global South, Governmental organisations started the period in third position, until they were overtaken by Healthcare organisations in 2018. Nonprofit organisations publish in the Global North but only marginally in the Global South, while Companies publish more in the Global North and have an increasingly important presence in the Global South.

Figure 5a: Distribution of research organisation type in the Global North between 2012 and 2023, in AI research publications in Dimensions
Figure 5b: Distribution of research organisation type in the Global South between 2012 and 2023, in AI research publications in Dimensions.

Funding trends

Geographic trends

Dimensions contains publicly available data about research funding, from both the public and the private sector. We find that the dominance of China and the US in AI research output correlates strongly with funding trends.

The following figure shows global AI research funding trends from 2012 to 2020, as found in Dimensions. Analysis of the number of AI research grants funded between 2012 and 2020 reveals rapidly rising investment across advanced and emerging economies: AI grants grew by 58.3%, while global funding across all fields of research grew by only 55.8%.

The data shows the United States maintains clear dominance, increasing its grants from 10,210 to 11,467 over the period examined. But China and Japan posted larger overall percentage gains, securing second and third place in total grants by 2020. China's trajectory stands out, going from 4,267 to 7,345 grants, surpassing the European Union bloc's relatively flat numbers, while Japan's grants swelled from 3,755 in 2012 to 6,465 in 2020. Notably, the UK doubled its numbers from 2012 to 2020.
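The relative gains can be checked directly from the grant counts quoted above:

```python
def pct_growth(start: float, end: float) -> float:
    """Percentage growth from a start count to an end count."""
    return (end - start) / start * 100

# Grant counts quoted in the text, 2012 -> 2020.
us = pct_growth(10_210, 11_467)      # ≈ 12.3%
japan = pct_growth(3_755, 6_465)     # ≈ 72.2%
china = pct_growth(4_267, 7_345)     # ≈ 72.1%
```

Japan and China thus grew at roughly six times the US rate over the period, which is what reshuffled the rankings despite the US's larger absolute numbers.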

While the order shuffled, the top 10 country funders collectively ramped up research commitments substantially. Growing global recognition of AI's revolutionary potential across practically all major industries drove heavier funding year after year, from both traditional leaders and chasing contenders. If the world maintains this pace, total AI research funding could double again before 2030.

Figure 6: AI research funding growth from 2012 to 2020 in the top 10 countries/regions with most grants

The data also shows Brazil was consistently one of the top 10 countries providing AI research grants over the past decade, ranking 7th among those listed. As one of the few non-Western countries in the top 10, Brazil's presence highlights broader global interest and investment in AI research beyond the dominant Western regions. Though its number of AI grants is not as high as leaders like the US and the EU27, the fact that Brazil sustained over 1,200 grants annually signifies meaningful national support and participation in this research area among emerging economies. Brazil maintaining its 7th-place ranking throughout the 9-year period reflects persistent research activity and funding commitments rather than being overtaken by other developing nations.

Conclusion

There are stark global divides in artificial intelligence (AI) research and development, with both publications and grants heavily concentrated in a few advanced countries over the past decade. The landscape of AI papers is dominated by the US and China, which have produced around 50% of all publications. A similar pattern emerges in AI grants, with the US and China also providing the most funding annually. A handful of other developed nations like the UK, Germany, and Japan have established strongholds in both metrics, collectively accounting for another substantial portion of global research outputs and total grants. However, the vast majority of the developing world – encompassing most countries across Africa, South America, and Asia – contributes under 5% in both metrics, significantly lagging in critical resources for progressing AI research.

While China has rapidly grown to lead AI publications, surpassing the EU, it still ranks second behind the US in the number of AI grants funded. Emerging economies like India and Brazil are making consistent gains, increasing research participation and grants provided, but their output and funding levels remain below those of the AI research leaders. The data highlights how a handful of advanced Western countries and China continue to vastly outpace most other nations in both key AI research metrics. Though the divides show some signs of blurring, deep funding and infrastructure disparities persist globally.

Fragmentation of AI research: a blog series
https://www.digital-science.com/tldr/article/fragmentation-of-ai-research-a-blog-series/
Thu, 07 Dec 2023 14:35:19 +0000

AI research has become fragmented across disciplines, geography, and policy. Specialised subfields rarely collaborate, limiting the spread of innovations from one area to others. Concentration in high-income countries also excludes global perspectives, while policies created in AI hubs may not transfer. Government regulations remain disjointed as well: in 2022 most countries lacked AI strategies, and existing policies conflicted across jurisdictions, ranging from promoting competitiveness to ethics. Overall, this disciplinary, geographic, and policy division hampers coordination across all of AI.

The post Fragmentation of AI research: a blog series appeared first on Digital Science.


In this blog series, we will explore the Fragmentation of Artificial Intelligence research. This first post lays out some of the key areas where AI research and development have become disconnected, making it more difficult to advance the field in a coordinated, ethical, and globally beneficial manner.

Figure 1: Created with DALL·E 3 with the prompt: “AI research subfields (icons representing: robotics, ML, NLP, Automatic Speech Recognition, Computer Vision, ethics, Deep learning) are each represented by a piece of puzzle scattered around.”

Artificial Intelligence (AI) is a relatively recent discipline, founded in the 1950s, which aims to mimic the cognitive abilities of humans. After a few "AI winters" in the 70s and 90s, the field has been booming since the 2010s thanks to increased computing capacity and the availability of large datasets.

The interdisciplinary foundations of AI draw from diverse fields across the sciences, technology, engineering, mathematics, and humanities. Disciplines like mathematics, computer science, linguistics, psychology, neuroscience, and philosophy provide vital technical capabilities, cognitive models, and ethical perspectives, while fields including law, sociology, and anthropology inform AI's societal impacts and governance. Together, this multidisciplinary collaborative approach aspires to enable AI systems that not only perform complex tasks, but do so in a way that accounts for broader human needs and societal impacts. However, significant challenges remain in developing AI that is compatible with or directed towards human values and the public interest. Continued effort is needed to ensure AI's development and deployment serve to benefit humanity as a whole rather than exacerbate existing biases, inequities, and risks.

Global Divides

Figure 2. Created with DALL·E 3 with the prompt: “researchers with a flag from the world on their clothes. They work on platforms at different levels. Some are isolated and cannot work with the others.” Ironically, the USA flag being so common, it is the one DALL·E 3 used most (when asked for lower- and higher-income country flags, it made some flags up).

Research is globally divided – high-income countries in particular are the biggest publishers of peer-reviewed publications and the biggest attendee group at research conferences. This is especially true in AI research, with AI researchers from poorer countries moving to hubs like Silicon Valley. This is due in part to the lack of cyber infrastructure in many countries (GPUs, electricity reliability, storage capacity, and so on), but also, for countries in the non-English-speaking world, to a lack of data availability in their native language.

The concentration of AI research in high-income countries has multiple concerning consequences. First, it prioritises issues most relevant to high-income countries while overlooking applications that could benefit lower-income countries (e.g. improving access to basic needs such as clean water and food production, or the diagnosis and treatment of diseases more prevalent in low-income regions). Second, the lack of diversity among AI researchers excludes valuable perspectives from underrepresented groups, including non-Westerners, women, and minorities. Third, policies and ethics guidelines emerging from the active regions may not transfer well or align across borders.

In the third blog post of this series, we will investigate the global division of AI research and look into possible solutions.

Siloed knowledge

Figure 3: Created with DALL·E 3 with the prompt: “separate, isolated compartments, each representing a specialised area of AI research, like computer vision, natural language processing, and robotics. In these compartments, researchers work on their respective pieces of the AI puzzle. However, these compartments are solid and tall, making it challenging for researchers to collaborate or see what’s happening in other areas”. As expected, the researchers depicted are white males.

However, in recent years research in AI has become so specialised that it is difficult to see where AI starts and ends. A good example of this is that many publications widely considered to be AI research are actually not classified as “Artificial Intelligence” in Dimensions. Take the AlphaFold publications: these are classified under Bioinformatics and Computational Biology rather than Artificial Intelligence. Many consider Machine Learning to be a subfield of Artificial Intelligence; however, the Fields of Research classification separates the two and places them at the same level.

Figure 4: co-authorship network of AlphaFold publications.

As AI research spreads to different fields, progress becomes more difficult to spread: researchers in different disciplines rarely organise conferences together, most journals specialise in one field of research, and researchers’ physical departments in universities are spread across buildings, so there is less collaboration between them. Any progress, such as that required to make AI more ethical, is therefore less likely to spread evenly to every applied AI field. For instance, transparency in AI, which is still in its infancy and developed thanks to collaboration between ethics and AI, will take more time to reach AI applied in Physics, Chemistry, and so on.

Do the benefits of applying AI in other research fields outweigh the difficulties in applying AI advancements? And how much interdisciplinarity actually happens? This will be the subject of the second blog post of this series.

Policy framework

Figure 5: Created with DALL·E 3 with the prompt: “The picture is divided in 10 sectors. In 6 sectors robots are happily playing but in other sectors the robots look sad and are behind bars”

Globally, government policies and regulations regarding the development and use of increasingly powerful large language models (LLMs) remain fragmented. Some countries have outright banned certain LLMs, while others have taken no regulatory action, allowing unrestricted LLM progress. There is currently no international framework or agreement on AI governance; efforts like the Global Partnership on Artificial Intelligence (GPAI) aim to provide policy recommendations and best practices related to AI, which can inform the development of AI regulations and standards at the national and international levels. The GPAI tackles issues related to privacy, bias, discrimination, and transparency in AI systems; promotes ethical development; and encourages collaboration and information sharing.

AI policies vary widely across national governments. In 2022, out of 285 countries, just 62 (22.2%) had a national artificial intelligence strategy, seven (2.5%) had one in progress, and 209 (73.3%) had not released anything (Maslej et al. 2023). Of those countries that took a position, the US at that time focused on promoting innovation and economic competitiveness, while the EU focused on ethics and fundamental rights. On October 30th 2023 the US signed its first executive order on AI (The White House 2023), which demands the creation of standards and more testing, and encourages a brain gain of skilled immigrants. At a smaller scale, city-level policies on AI are also emerging, sometimes conflicting with national policies. San Francisco, for instance, banned police from using facial recognition technology in 2019.

Ultimately, AI regulations tend to restrict AI research; if applied unevenly around the world, this would create centres of research where fewer regulations apply.

How does this varied policy landscape affect the prospects of AI research? Will it lead to researchers migrating to less restricted regions? These questions will be addressed in another blog post.

Bibliography

Maslej, Nestor, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, et al. 2023. ‘Artificial Intelligence Index Report 2023’. arXiv. https://doi.org/10.48550/arXiv.2310.03715.

The White House. 2023. ‘FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence’. The White House. 30 October 2023. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

The post Fragmentation of AI research: a blog series appeared first on Digital Science.

]]>
Fragmentation: a divided research world? https://www.digital-science.com/tldr/article/fragmentation-a-divided-research-world/ Mon, 25 Sep 2023 07:21:26 +0000 https://www.digital-science.com/?post_type=tldr_article&p=66480 Research has the power to change lives, break down barriers and create unity & equity. When the research community solves problems together extraordinary breakthroughs can happen.

But post-pandemic, fragmentation in the research ecosystem remains one of the biggest challenges to the ability of researchers to make a real-world difference. We want to challenge the status quo, highlight the issues, and share positive ways to create better synergy and collaboration, helping to unite a divided research world.

The post Fragmentation: a divided research world? appeared first on Digital Science.

]]>
Last updated: 23rd October 2023, with new featured articles.

Is a fragmented research ecosystem slowing global progress?

Research has the power to change lives, break down barriers and create unity & equity. When the research community solves problems together extraordinary breakthroughs can happen.

But post-pandemic, fragmentation in the research ecosystem remains one of the biggest challenges to the ability of researchers to make a real-world difference. We want to challenge the status quo, highlight the issues, and share positive ways to create better synergy and collaboration, helping to unite a divided research world.  

A new campaign

Today, 25th September 2023, we at Digital Science are launching a new campaign focusing on ‘Fragmentation – A divided research world?’.

We live in an ever more connected yet fragmented world, and the research ecosystem is no exception to this. An important question comes to mind: if components of the research ecosystem are fragmented, does this mean there is fragmentation of research itself? Or, to perhaps put it more simply: is research fragmented, and if so, how?

Our campaign aims to highlight the structural features of fragmentation, by consolidating concepts and by demonstrating a number of analytical approaches through the use of Digital Science tools such as Dimensions.

There is also something intriguing about ‘fragmentation’ that we think is worth exploring in the context of the research ecosystem, and we asked ourselves what fragmentation represents in the world of research. What does it mean in academia?  What does it mean in the corporate sector? 

The processes aligned with fragmentation are difficult to capture. However, we will shed light on these through an understanding of the processes in research, including its contributors, segments and the tools making up the research ecosystem; these will form the basis of our analysis. This campaign is also tied closely into one of Digital Science’s key missions: “Advancing the research ecosystem — together, we make open, collaborative and inclusive research possible“, and we look forward to working with the community throughout this work.

Global divides and siloed knowledge

We start our campaign with a focus on global divides, where we explore some of the geographic aspects of a fragmented world, for example in the Global North and Global South countries, where we know there are many disparities. We also examine global challenges through the lens of the UN’s Sustainable Development Goals (SDGs) and evaluate global issues including big data for sustainable development.

The campaign then moves into the domain of siloed knowledge, where we concentrate our attention on areas of research where a lack of integration can result in research findings remaining isolated, limiting their broader applicability across the research ecosystem. Bridging the fragmented nature of research knowledge gaps and promoting cross-disciplinary collaboration is another area where we provide insights. 

Bridging the divides in research

Fragmentation applies to many aspects of the research lifecycle across different contributors from academia, organisations, research funders, governments and businesses. Each is delicately networked, and none is immune to the effects of fragmentation.

Digital Science was originally conceived to provide new solutions in this fragmented space, with its broad portfolio of companies covering various aspects of day-to-day research life and its necessities. We understand the fragmented state of research affairs, offering bespoke solutions for individual niches.

This campaign is about analysing and telling stories of the fragmented research world, shedding a light on places where fragmentation occurs (be it, for example, silos of knowledge or global divides) and demonstrating how we can better understand the diversity of research to future-proof—and provide solid foundations for—the global research endeavour.

Rank Outsiders

Can a new ranking reverse fragmentation in higher education?

A tale of two pharmas – Global North and Global South

Perspectives on funding & collaboration, and the localisation of SDGs in the pharmaceutical industry, via a bibliometric evaluation of scientific publications.

Exploring fragmentation: a divided research world.

This article sets out what we mean by fragmentation in the context of research, and how we will explore the topic through a variety of lenses during the campaign.

A multi-dimensional approach to assessing the impact of the UN’s Sustainable Development Goals (SDGs)

In this short interview, Dr Briony Fane and Dr Juergen Wastl explain the methods behind their work on assessing how global research ties into the UN’s Sustainable Development Goals.

SDGs: A level playing field?

A new white paper on the UN SDGs shows more can be done to raise up funding and research recognition for the developing world.

SDGs research outputs per year by country income

Reaching out

If you’d like to find out more about what Digital Science does, or have an idea for an article or a topic we should cover during this campaign, please get in touch.

You can also meet our colleagues from across Digital Science at events & webinars throughout the year, including our recently relaunched Speaker Series and #FuturePub community events.

The post Fragmentation: a divided research world? appeared first on Digital Science.

]]>
Vaccine Hesitancy and the importance of Trust: An investigation using Digital Science’s Dimensions Research Integrity (DRI) https://www.digital-science.com/tldr/article/vaccine-hesitancy-and-the-importance-of-trust/ Thu, 10 Aug 2023 09:35:14 +0000 https://www.digital-science.com/?post_type=tldr_article&p=65001 With trust in research a critical issue, our team takes a detailed look at a key 'trust marker' in research publications on vaccine hesitancy.

The post Vaccine Hesitancy and the importance of Trust: An investigation using Digital Science’s Dimensions Research Integrity (DRI) appeared first on Digital Science.

]]>

“Research has integrity when it is carried out in a way that is trustworthy, ethical, and responsible”

UK Committee on Research Integrity

There is growing interest in ensuring the transparency and reproducibility of published scientific research as a basis for trust. Although there have been improvements in the last few years in aspects of reproducibility and transparency (e.g. data and code availability), further improvements are still required to make research fully reproducible across disciplines; in particular, features that highlight the integrity of research should be made more prominent.1 In this blog we primarily focus on data availability as a marker of trust, to understand the practice of data sharing and also to see how this is changing over time.

Digital Science’s Dimensions has recently integrated a research integrity dashboard providing access to data on trust markers in research publications, which are hallmarks of research integrity and open science. These include statements regarding data availability, code availability, conflicts of interest, and ethics approval, all of which are markers of trustworthiness and reproducibility.

We take a detailed look at trust markers in a particular research area, vaccine hesitancy, and evaluate the proportion of scientific research publications that include one of these trust markers. Vaccine hesitancy is defined as “a delay in acceptance, or refusal of vaccination despite availability of vaccination services”2, and is driven by a number of factors. It is a global phenomenon fuelled by anti-vaccination groups, fake news, and misinformation spread through social media.3

In 2019 the World Health Organization (WHO) identified vaccine hesitancy as a top global health threat.4 According to WHO, it threatens to reverse the historic global efforts to stop vaccine-preventable diseases. Vaccine hesitancy was chosen as a subject with which to explore issues concerning trust because of the nature of the research and its potential to include trust markers.5 Markers such as ethics approval and data availability (e.g. supplementary files providing access to data) are likely to be required by a funder and/or journal to ensure the integrity of research, including its reproducibility and transparency.

Vaccine hesitancy is closely linked to the clinical sciences as a research area. However, this topic is relevant in a societal context, from a public health perspective and in understanding why there is hesitancy. We might also expect that developing effective health communications and campaigns to correct vaccine misinformation, for example, would link to the social sciences. In this context, we will also look for interdisciplinarity within the vaccine hesitancy research output and compare the coverage of data availability in the social sciences with the clinical sciences, while at the same time assessing any crossover, providing evidence of interdisciplinarity.

Figure 1: Outline of categories of Trust Markers

Research questions

1. Vaccine hesitancy and its representation in research publications based on research classifications:

  • Research, Condition and Disease Categorisation (RCDC)

2. Do Trust markers play a role in vaccine hesitancy research?

  • Looking at categories of availability within one trust marker – data availability

3. Do patterns emerge amongst the data?

  • Looking at interdisciplinarity with social sciences tagged research and clinical medicine tagged research
  • Comparisons between trust markers in research publications pre-Covid (2017-2019) and post-Covid (2020-2022).

Methodology

1. A ‘vaccine hesitancy’ search string was sourced and adapted from a recent paper on vaccine hesitancy and Covid-19.6 The search string is included below as an Appendix.

2. Relevant research publications were used to pull out data from GBQ relating to:

  • data availability 
  • data availability for the top five Research, Condition and Disease Categorisation (RCDC) categories.

3. The Dimensions Research Integrity dataset was used in conjunction with Google Big Query (GBQ) to access data relating to trust markers in research associated with vaccine hesitancy. These data feed into the Dimensions Research Integrity dashboard that is accessible in Dimensions. 

4. Python programming was used to analyse the data.
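Steps 2-4 above can be sketched in Python. This is a minimal illustration only, not the authors’ actual code: the BigQuery table and column names below are assumptions for the example, and the real Dimensions Research Integrity schema may differ.

```python
import pandas as pd

# Hypothetical SQL against the Dimensions dataset on Google BigQuery.
# Table and column names are illustrative assumptions, not the real schema.
SQL = """
SELECT year,
       COUNT(*) AS n_pubs,
       COUNTIF(data_availability_statement IS NOT NULL) AS n_with_das
FROM `dimensions-ai.data_analytics.publications`
WHERE year BETWEEN 2018 AND 2022
GROUP BY year
ORDER BY year
"""

def add_das_percentage(df: pd.DataFrame) -> pd.DataFrame:
    """Add the share of publications carrying a data availability statement."""
    out = df.copy()
    out["pct_with_das"] = (100 * out["n_with_das"] / out["n_pubs"]).round(1)
    return out

# To run against GBQ (requires credentials and google-cloud-bigquery):
# from google.cloud import bigquery
# df = add_das_percentage(bigquery.Client().query(SQL).to_dataframe())
```

With counts like those in Table 1, `add_das_percentage` reproduces the reported percentages (e.g. 4 of 41 publications in 2018 gives 9.8%).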

Results

To get an initial sense of the data, we first analysed the vaccine hesitancy research publications from Dimensions to ascertain the distribution of subject areas within which the research in this area is aligned. We looked at the top five RCDC areas which provide the bulk of research in this area. We then used these data to unpick the inclusion of data availability statements alongside research outputs.

Figure 2: vaccine hesitancy research by top five research, condition and disease categories7

Table 1 below demonstrates the acceleration in data availability statements in the last five years.

Year    Publications    With data availability statement    Percentage
2018          41                   4                            9.8%
2019          59                   6                           10.2%
2020         107                  15                           14.0%
2021         269                  75                           27.9%
2022         302                  99                           32.8%
Table 1: Number of vaccine hesitancy research papers, and the number and percentage including a data availability statement, over a five-year period (2018-2022).

To provide an example of research integrity available in the Dimensions Research Integrity dataset we explored one trust marker – data availability statements – and extracted the data attached to each of the categories of data availability. Figure 3 below displays the percentage for each category over a seven-year time period. Although there is an overall increase in data files made available on request from authors (peacock blue), the same increase has not translated to the inclusion of data made available as a file attached to the research publication. Other categories of data availability (online repository, not publicly available, etc) are small in number and show no pattern.

Figure 3: percentage of data availability statements included in vaccine hesitancy research by category of data availability. Data for this category of trust markers is only available from 2016.
Figure 4: Number and percentage of the top five Research, Condition and Disease classified (RCDC) research publications with data availability statements attached. Yellow bars refer to the number of research publications including data availability statements, and red bars highlight the percentage of the global total publications.

Figure 4 highlights the transformation in the uptake of data availability statements in published research as categorised by the RCDC classification system available in Dimensions. We observe an extremely small proportion of publications including a data availability statement in 2011 (the year Dimensions established its reporting on trust markers), rising to an 82% uptake in 2022. This rise in data availability is very marked and almost certainly related to the speed with which the research community responded to the Covid-19 pandemic. The arrival of Covid prompted a repositioning in data availability statements, either acknowledged in or physically attached to vaccine hesitancy research publications.

The word clouds below set out a representation of the concepts most frequently included in research publications associated with vaccine hesitancy. What is noticeable is that the focus of this research is associated with a number of vaccines pre-Covid but shifts to a predominance of Covid vaccine research during the post-Covid years.

What is also of note is that of the 147 vaccine hesitancy research publications published pre-Covid (2017-2019), 12 (8.1%) include a data availability statement; of the 725 publications published post-Covid (2020-2022), 190 (26%) do. Although vaccine research pivoted to respond to the Covid pandemic, which likely accounts for the marked increase in data availability, there are still signs of general vaccine research for infectious diseases (see Figures 5 & 6).

Figure 5: word cloud of concepts appearing in research publications pre-Covid (2017-2019)
Figure 6: word cloud of concepts appearing in research publications post-Covid (2020-2022)

Identifying and understanding the social basis of vaccine hesitancy is important for matters such as future public health policy planning, and for developing and implementing methods to spread accurate information about the safety and effectiveness of vaccination. This would be important for reducing or eliminating vaccine hesitancy.

Figure 7: Network visualisation of topics (concepts) featuring in vaccine hesitancy research, using VOSviewer in Dimensions (https://www.dimensions.ai/blog/visualize-networks-instantly-within-dimensions/).

Figure 7 displays four distinct clusters showing the connections within and between each topic area. The four clusters can be further grouped into two broader groupings: i) two clinical/health research clusters (HPV and, more recently, a Covid-related domain), and ii) two social research clusters (religious exemption and conscientious objection, connected by the concept of ‘law’). The topic network visualisation gives us a sense of the multi- and interdisciplinary nature of vaccine-related research.
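The clustering behind a figure like this can be approximated programmatically. Below is a minimal sketch using networkx rather than VOSviewer (which has its own layout and clustering algorithms); the concepts and co-occurrence weights are invented for illustration.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy co-occurrence edges between concepts; weights are invented.
edges = [
    ("hpv", "vaccine", 5), ("covid-19", "vaccine", 8),
    ("religious exemption", "law", 4), ("conscientious objection", "law", 3),
    ("hpv", "covid-19", 1), ("law", "vaccine", 1),
]
G = nx.Graph()
G.add_weighted_edges_from(edges)

# Partition the network into communities by modularity maximisation.
clusters = greedy_modularity_communities(G, weight="weight")
for community in clusters:
    print(sorted(community))
```

On real Dimensions concept data, the edge list would come from concept co-occurrence counts across publications rather than hand-written tuples.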

Conclusions

The scientific research community is aware that the integrity and trustworthiness of its published research is of increasing importance, and research integrity practices are changing rapidly in response to this. Data transparency has played a key role in research conducted to develop a Covid vaccine. This blog demonstrates the considerable increase in the adoption of just one trust marker, data availability statements, as we move towards an era where open and trustworthy science are crucial. The more that data is made publicly available, the more transparency, accountability, and democratisation of the research process are enabled.

Dimensions Research Integrity

To learn more about Dimensions Research Integrity and to request a demo or a free quote, click here: https://www.dimensions.ai/request-a-demo-or-quote/ 

Footnotes

1. https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2006930

2. https://pubmed.ncbi.nlm.nih.gov/25896383/

3. https://doi.org/10.29333/ejgm/13186

4. World Health Organization. Ten Threats to Global Health in 2019; WHO: Geneva, Switzerland, 2019.

5. Trust markers are explicit statements on a research publication such as funding, data availability, conflict of interest, author contributions, and ethical approval and represent a contract between authors and readers that proper research practices have been observed. Trust markers highlight a level of transparency within a publication and reduce the reputational risks of allowing non-compliance to research integrity policies to go unobserved.

6. https://www.ejgm.co.uk/article/analyzing-research-trends-and-patterns-on-covid-19-vaccine-hesitancy-a-bibliometric-study-from-2021-13186

7. The Research, Condition, and Disease Categorization (RCDC) is a classification scheme used by the US National Institutes of Health (NIH) for reporting required by the US Congress. The implementation of this system used automated allocation of RCDC codes to documents in Dimensions based on category definitions defined by machine learning.

Appendix

Vaccine hesitancy search string:

"vaccin* hesitan*" OR "hesitan* to vaccine*" OR "vaccin* refusal" OR "refusal to vaccine*" OR "vaccin* opposition" OR "opposit* to vaccin*" OR "antivacc* group*" OR "antivax" OR antivaxx OR antivaccination OR "object* to vaccin*" OR "resilience to vaccin*" OR "debate against vaccin*" OR "vaccin* *compliance" OR "vaccine* *adherence" OR "resist* to vaccin*" OR "incomplete vaccin*" OR "misinformation about vaccine*" OR "vaccin* misinformation" OR "vaccin* criticism*" OR "delaying vaccin*" OR "anxiety from vaccin*" OR "criticism to vaccin*" OR "barrier* to vaccin*" OR "lack of intent to vaccin*" OR "poor completion of vaccin*" OR "compulsory vaccin*" OR "negative perception about vaccin*" OR "engagement in vaccin*" OR "choice to vaccin*" OR "awareness about vaccin*" OR "knowledge about vaccin*" OR "behavi* toward vaccin*" OR "poor vaccin* uptake" OR "vaccin* uptake rate" OR "doubts about vaccine*" OR "acceptance of vaccine*" OR "acceptability of vaccine*" OR "contravers* about vaccine*" OR "fear from vaccin*" OR "belief in vaccin*" OR "mandatory vaccin*" OR "compulsory vaccin*" OR "willingness to accept vaccin*" OR "willing to accept a vaccin*" OR "parental control of child* vaccin*" OR "willingness to vaccinate" OR "willingness to accept vaccin*" OR ("religious exemption" AND vaccin*) OR "vaccin* accept*" OR "vaccin* resist*" OR "vaccin* conspiracy" OR "vaccin* skepticism" OR "accept* of the vaccin*" OR "intent* to vaccin*" OR "intent* to get vaccin*" OR "attitude* toward* vaccin*"

The post Vaccine Hesitancy and the importance of Trust: An investigation using Digital Science’s Dimensions Research Integrity (DRI) appeared first on Digital Science.

]]>
Our new avenue for interesting things https://www.digital-science.com/tldr/article/our-new-avenue-for-interesting-things/ Thu, 27 Apr 2023 18:25:36 +0000 https://www.digital-science.com/?post_type=tldr_article&p=62313 Welcome to Digital Science TL;DR, our new avenue for interesting things!

We bring you short, sharp insights into what’s going on across the Digital Science group; both through our in-house experts and in conversation with amazing people from the community. And we’ll keep it brief!

The post Our new avenue for interesting things appeared first on Digital Science.

]]>
Welcome to Digital Science TL;DR, our new avenue for interesting things!

We bring you short, sharp insights into what’s going on across the Digital Science group; both through our in-house experts and in conversation with amazing people from the community. And we’ll keep it brief!

Why TL;DR? Because we’ve all experienced the “Too long; didn’t read” feeling at times, and by explicitly calling this out we’re making sure we provide a short summary at the top of every article here. 🙂

Introducing our core team

We have a core team of five (at present!) who will be the primary authors of new content on the site, often working in collaboration with our in-house experts and those in the scientific and research community.

You can think of it like our core team acting as the lightning rods ⚡ attracting cool, exciting, and sometimes provocative content from across the Digital Science group and our wider community of partners, end users, customers and friends.

And so without further ado, please say hello to: Briony, John, Leslie, Simon and Suze!

Briony Fane

Briony Fane is Director of Researcher Engagement, Data, at Digital Science. She gained a PhD from City, University of London, and has worked both as a funded researcher and a research manager in the university sector. Briony plays a major role in investigating and contextualising data for clients and stakeholders. She identifies and documents her findings, trends and insights through the curation of customised in-depth reports. Briony has extensive knowledge of the UN Sustainable Development Goals and regularly publishes blogs on the subject, exploring and contextualising data from Dimensions.

John Hammersley

John Hammersley has always been fascinated by science, space, exploration and technology. After completing a PhD in Mathematical Physics at Durham University in 2008, he went on to help launch the world’s first driverless taxi system now operating at London’s Heathrow Airport.

John and his co-founder John Lees-Miller then created Overleaf, the hugely popular online collaborative writing platform with over eleven million users worldwide. Building on this success, John is now championing researcher and community engagement at Digital Science.

He was named as one of The Bookseller’s Rising Stars of 2015, is a mentor and alumni of the Bethnal Green Ventures start-up accelerator in London, and in his spare time (when not looking after two little ones!) likes to dance West Coast Swing and build things out of wood!

Image credit Alf Eaton. Prompt: “A founder of software company Overleaf, dancing out of an office and into London while fireworks explode. high res photo, slightly emotional.”

Leslie McIntosh

Leslie McIntosh is the VP of Research Integrity at Digital Science and dedicates her work to improving research and investigating and reducing mis- and disinformation in science.

As an academic turned entrepreneur, she founded Ripeta in 2017 to improve research quality and integrity. Now part of Digital Science, the Ripeta algorithms lead in detecting trust markers of research manuscripts. She works around the globe with governments, publishers, institutions, and companies to improve research and scientific decision-making. She has given hundreds of talks including to the US-NIH, NASA, and World Congress on Research Integrity, and consulted with the US, Canadian, and European governments.

Simon Porter

Simon Porter is VP of Research Futures at Digital Science. He has forged a career transforming university practices in how data about research is used, both from administrative and eResearch perspectives. As well as making key contributions to research information visualization, he is well known for his advocacy of Research Profiling Systems and their capability to create new opportunities for researchers.

Simon came to Digital Science from the University of Melbourne, where he worked for 15 years in roles spanning the Library, Research Administration, and Information Technology.

Suze Kundu

Suze Kundu (pronouns she/her) is a nanochemist and a science communicator. Suze is Director of Researcher and Community Engagement at Digital Science and a Trustee of the Royal Institution. Prior to her move to DS in 2018, Suze was an academic for six years, teaching at Imperial College London and the University of Surrey, having completed her undergraduate degree and PhD in Chemistry at University College London.

Suze is a presenter on many shows on the Discovery Channel, National Geographic and Curiosity Stream, a science expert on TV and radio, and a science writer for Forbes. Suze is also a public speaker, having performed demo lectures and scientific stand-up comedy at events all over the world, on topics ranging from Cocktail Chemistry to the Science of Superheroes.

Suze collects degrees like Pokémon, the latest being a Masters from Imperial College London that focused on outreach initiatives and their impact on the retention of women engineering graduates within the profession.

Suze is a catmamma and in her spare time loves dance and Disney, moshing and musical theatre.

Introducing our core topics

We are focusing our content around a set of core topics which are critical not just to the research community but to the world as a whole; at Digital Science we believe research is the single most powerful transformational force for the long-term improvement of society, and our vision is a future where a trusted, frictionless, collaborative research ecosystem helps to drive progress for all.

With this vision in mind, our five core topics at launch are: Global Challenges, Research Integrity, The Future of Research, Open Research, and Community Engagement.

These topics will no doubt continue to evolve over time, but that gives us a lot to get started with! Here’s the short summary of what those topics mean to us:

Global Challenges

Most of the world’s technical and medical innovations begin with a scientific paper. It has been said that the faster science moves, the faster the world moves.

But perhaps more importantly, society increasingly looks to science for solutions to today’s most pressing social and environmental challenges. If we’re going to face up to complex health issues, an ageing population, and the digital transformation of the world, we need science and research that is faster, more trustworthy, and more transparent.

With this in mind, we explore how science and research, and its communication, is evolving to meet the needs of our rapidly changing world.

Research Integrity

Research integrity will be a dominant theme in scholarly communications over the next decade. Challenges around ChatGPT, paper mills, and fake science will only get thornier and more complex. We expect all stakeholders – research institutions, publishers, journalists, funding agencies, and many others – will need to dedicate more resources to fortify trust in science.

Even faced with these challenges, taking the idea of making research better from infancy to integration is exciting. Past and present, our team has built novel and faster ways to establish trust in research. We are happy to have grown a diverse group that will continue to develop the technical pieces needed to assess trust markers.

The Future of Research

Since its inception, Digital Science has always concerned itself with the future of research tools and infrastructure, with many of our products playing a transformative role in the way research is collaborated on, organised, described and analysed. Within this topic, we explore how Digital Science capabilities can continue to contribute to discussions about the future of research, as well as highlighting interesting developments and initiatives that capture our imagination.

Open Research

At Digital Science, we build tools that help the researchers who will change the world. Information wants to be free, and since the dawn of the web, funders have been innovating their policies to ensure that all research will become open.

Digital Science believes that Open Research will help level the playing field and allow anyone anywhere to contribute to the advancement of knowledge. It also helps with other areas that pre-web academia struggled with, including reproducibility, transparency, accessibility and inclusivity.

These posts will cover the why and the how of open research, as it becomes just “research”.

Community Engagement

One of Digital Science’s founding missions was to invest in and nurture small, fledgling start-ups to transform scholarly research and communication. Those founding teams now form the heart of Digital Science, and the desire to make, build, and change things for the better is at the core of what we do.

But we’ve never done that in isolation; Digital Science is a success because it’s always worked with the community, and most of us came from the world of research in one form or another!

In these community engagement posts we highlight and showcase some of the brilliant new ideas and start-ups in the wider science, research and tech communities.

What’s up next?

That’s all for this welcome post, but stay tuned for a whole batch of launch content being written as we speak! We’ll also have regular weekly posts from the team, and we’d love to hear from you if you have an idea for a subject we should cover, or simply if you’d like to say hello!

You can contact us via the button in the top bar or footer, or via the social media links for our individual authors. 

Ciao for now!  

The post Our new avenue for interesting things appeared first on Digital Science.
