International Conference
Artificial Researchers and Scientific Discoveries
September 11–13, 2023
Humanity faces grand challenges, and the hope is that scientific progress will provide us with inventions, innovations, and breakthroughs that help us meet those challenges. Scientific progress, in turn, depends crucially on scientific discoveries. Now, in light of the rapid advances we are seeing in the field of Artificial Intelligence (AI) all around us, the question arises as to how AI can contribute to new scientific discoveries, or even make them on its own. Could there even be AI researchers – and what should that mean?
This is the overarching question of the Artificial Researchers and Scientific Discoveries conference, which gathers experts from a variety of backgrounds to explore further questions such as the following: What role might AI play in scientific discovery? Could AI find something on its own and communicate the results of its research? What about its creative capabilities? Could AI meet the criteria for authorship, and how could AI contribute to science communication? Could it make suggestions for science policy?
In short, this conference is about identifying and also evaluating different requirements, possibilities, and limitations for AI researchers and AI in scientific discovery.
Attendance at the conference is free, but space is limited, and registration is required.
If you plan to attend the conference, please register via email to , including your full name, email address, and affiliation. In case the event is over-subscribed, priority will be given to those who have registered.
Please find the Conference Program here.
The conference will be held on the HHU Düsseldorf Campus in building 23.21 in rooms U1.95 and U1.97.
There are various cafés on campus; one close to our rooms is in building 23.01, room 00.XX. For the lunch break you can use the Mensa. Please note that payment in the Mensa is by MensaCard or cash only.
Conference dinner (by invitation only):
Wilma Wunder
Nearby hotel (if needed):
HK-Hotel Düsseldorf
Speakers & Abstracts
Marianna Bergamaschi Ganapini
(Union College, USA)
AI and Scientific Discovery
Over the past few years, AI technologies have demonstrated their capabilities in various scientific fields, including but not limited to: drug discovery, personalized medicine, climate modeling and environmental research, archaeology and anthropology. This is usually achieved through AI’s extraordinary ability to find patterns in data (LeCun et al., 2015; Schmidhuber, 2015; Goodfellow et al., 2016). AI can also be a valuable tool in hypothesis generation, suggesting new research directions to scientists. And AI can find new scientific facts based on some background theory (Hey et al., 2009; Agrawal et al., 2018; Cockburn et al., 2018). As AI technology continues to advance, its potential for scientific discoveries is likely to grow, leading to exciting breakthroughs in various scientific disciplines. Though AI has the potential to make significant contributions to scientific discoveries, in this paper I raise some questions surrounding the role of AI in scientific research. In particular, I raise some challenges to the claim that AI can in fact itself discover anything. I will not deny that AI can find new things, since AI is able to reason abductively, generate hypotheses, and adopt heuristic methods. However, no new knowledge is generated and transmitted by AI itself, at least so far. And these are necessary (albeit not sufficient) conditions for scientific discoveries. At best, for now AI can produce evidence for, or signals of, unknown scientific facts, but its inability to represent and metacognitively assess ‘its expertise’ is a limit to its ability to know.
Samantha Copeland
(Delft University of Technology, Netherlands)
Patterns and Potential Value – Chance and AI-driven Discovery
The idea that the data we can collect about the world contains connections and patterns surpassing any individual human’s cognitive ability to see lies behind the promise of AI in the realm of discovery. Humans, that is, have relied often and throughout history on chance and even luck to make key discoveries about our world—much to the dismay of Bacon, for instance, who sought to replace that reliance with a new era of method. This desire to use method to get at the knowledge we know is out there but cannot access underlies the belief that algorithms can discover patterns and connections within that data (read: translated observations, often believed to be direct representations of our world…but more on that in the talk!), at a faster rate than serendipity can promise. This ‘undiscovered public knowledge’ (à la Swanson) that lies within the data, however, and as many have recently shown, is ‘public’ in a very human way. I discuss how this relates to how we conceive of serendipity—and chance itself—and argue that another way of thinking, one that recognizes the role of emergence and top-down causation, offers the best way to ground approaches to the uses of AI in discovery. Finally, I will comment, too briefly, on the normative aspects of this suggestion.
Jan G. Michel
(University of Düsseldorf, Germany)
Can machines make scientific discoveries?
To answer the question of whether machines can make scientific discoveries, two sub-questions need to be answered: (1) What is required to make a scientific discovery? (2) Are machines capable of meeting these requirements? After some remarks on the role of scientific progress and the significance of machines in science, I show how to conceive of scientific discoveries as structured processes with the three indispensable structural features of finding, acceptance, and knowledge (cf. Michel 2022). By elaborating on each of these features, I identify several requirements for artificial discoverers. Turning then to an objection raised by Green (2022), I address two crucial issues: First, the question of whether machines can perform speech acts (cf. Green & Michel 2022), and in particular declarative speech acts. Second, the question of how different institutionalized publication cultures realize what I call acceptance mechanisms. With this in mind, I show how Green’s objection can be met once we distinguish between degrees of acceptance in certain ways. I close with a diagnosis of what to make of the idea of artificial discoverers in science.
Ram Neta
(University of North Carolina, USA)
Deliberation, understanding, and the possibility of wondering what you are
Deliberation must involve something more than the execution of an algorithm. This is because a deliberator who executes an algorithm might still wonder “what does the conclusion of that algorithm have to do with me?” To deliberate requires understanding the subject matter of the algorithm as relevant to oneself. But such understanding requires more than just any old kind of self-representation: it requires a special kind of self-representation that philosophers call “de se”. And the standard psychological tests of self-awareness (like the Mirror Self-Recognition Test) do not test for the presence of de se representation. The most effective test for de se representation was the one described by Descartes in his Meditations: can you doubt the existence of anything you perceive, while still representing yourself as a thing that perceives it?
Michael T. Stuart
(University of York, England)
AI increases our understanding but doesn’t understand
One way to make progress in science is to increase scientific understanding. There are now many accounts of scientific understanding on offer. Perhaps the most basic notion of understanding, and the easiest to satisfy, is the having of an epistemic ability. For example, a scientist understands a phenomenon to the extent they can explain, predict, or model it. To bar things like calculators from having this kind of understanding, we should tie this notion of understanding to responsibility, such that something understands (in this sense) only when it is the reason that it has the ability it has, e.g., because intentional actions were performed to gain that ability. In particular, we are typically responsible for our abilities to the extent that we are in control of what we are doing, and we know what we are doing. AI complicates this intuitive notion of understanding, since its level of ability can be extremely high, while its level of responsibility can be extremely low. This makes it unlike anything else that we typically attribute understanding to. I conclude that while AI can contribute to scientific progress as a tool, it should not be thought of as contributing to scientific progress as a member in a collaboration, or as something with its own understanding.
Christiane Woopen
(University of Bonn & Center for Life Ethics Bonn, Germany)
Trustworthy and decision-making AI – really?
There is currently a lot of discussion about algorithmic decision-making, and artificial intelligence (AI) is expected to come close to or even soon surpass human intelligence in more and more areas. But if AI is so intelligent and also capable of making decisions, wouldn’t it only be consistent to trust AI more than humans when it comes to difficult decisions? In my contribution, I start from an action-theoretic understanding of what it means to make a decision and what conditions have to be met in order to classify something as a decision, also taking into account the concept of intelligence. From here, I discuss whether or how it is possible to talk about algorithmic decision-making in a meaningful way. In a next step, I address the ethical implications of using AI for decision-making in some exemplary contexts, in order to finally assess whether we can and should rely on humans or on AI when difficult decisions have to be made.
Kim J. Boström
(University of Münster, Germany)
Dr. Robot – Can artificial intelligence do real science?
Artificial intelligence (AI) can beat any human at the game of Go, write computer code, and converse about just about anything in a way that is virtually indistinguishable from a human. Recently, however, AI has also been claimed to have “discovered” previously unknown protein structures, and possibly to have “disproved” the currently accepted model of the proton, a fundamental building block of matter. So, should we expect AI to appear on the list of authors of scientific papers? I suggest that a necessary condition for a cognitive system, whether human or machine, to be legitimately considered a contributing author of a scientific discovery is that it has a sufficient level of understanding of what it has discovered. I argue that for the most compelling examples given so far, current AI does not yet meet this requirement and thus should not be included on the author list of scientific publications. There is, however, no reason to assume that this will always remain the case.
Mitchell Green
(University of Connecticut, USA)
Genuine Meaning in Artificial Speakers
Green and Michel, in ‘What Might Machines Mean?’ (Minds and Machines, vol. 32 (2022), pp. 323–8), argue that under certain conditions, artificially intelligent robots are able to perform speech acts in the traditional, semi-technical sense of ‘speech act’ traceable to Austin and Searle. In their ‘AI Assertion’ (OSF Preprints, 2023), Butlin and Viebahn contend that Green and Michel’s showcase examples do not meet the normative standards required to make assertions. In this talk I will recount Green and Michel’s original argument, which stresses that the sorts of mental states, and the conceptual competence underpinning them, required for the performance of many speech acts do not require consciousness. I then reply to Butlin’s and Viebahn’s challenge, showing that with a modest clarification of their position, Green and Michel can accommodate Butlin’s and Viebahn’s objection while maintaining their original contention that it is possible for artificially intelligent robots to illocute.
Joseph G. Moore
(Amherst College, USA)
Generative AI, Artistic Creativity, and Discovery
Generative artificial intelligence is transforming the ways we make, and think about, art. With prompting from human users, these generative systems now produce novel and aesthetically compelling works in a variety of artistic domains. In doing so, they challenge the ways we think about artistic credit, about creativity, and about the mechanism of legal copyright, which is meant to protect and promote creativity in a capitalist art market. All of this is currently at play in the courtroom, as artists contest the ways in which their artworks can be rightfully fed into these artificial systems (“the problem of the inputs”), and others are challenged over whether they might be credited for the products (e.g., visual images) these systems generate (“the problem of the outputs”). Here I’ll first argue that the contested legal landscape surrounding these artificial systems reflects the ways they challenge our received notions of creativity and artistic credit. They challenge, specifically, what I’ll call the “agential assumption”: only fully-minded and autonomous agents, like human artists, are capable of creative art-making, and so of artistic credit. Next, I’ll explore some collective and partial (or sub-agential) models of creative processes as a way of challenging this assumption. Finally, I’ll ask how these extended conceptions of creativity can be carried over to our notion of discovery.
Michael Ohl
(Museum für Naturkunde Berlin & Humboldt University Berlin, Germany)
How many species are there?
Biological species are considered fundamental entities that allow us to make generalized statements about nature. Despite a controversial debate about the conceptual nature of species that continues to this day, they play a central role in the research and application practice of the life sciences. Species are usually treated as if they exist independently of human cognition as an evolutionary product and can thus be discovered. Hypotheses about the existence of species are linguistically fixed in the form of formalized species descriptions and species names. The formation of a scientific species name and its publication in a relevant publication organ are elementary parts of the practice of species discovery.
Surprisingly, even the total number of formally named and thus discovered species of multicellular organisms is not reliably known. It is on the order of 1.5 to 1.8 million species. Completely unknown, moreover, is the number of species that actually exist on Earth. Currently, about 18,000 new species are discovered worldwide every year. Estimates of how many species are still undiscovered vary between 5 and 100 million, depending on the estimation method used. In the context of the current societal debate on climate change and species extinction, the question of the actual extent of global and local species diversity is more than an academic discussion. Species numbers are important metrics for assessing habitats and are used as a basis for politically motivated decisions in nature conservation.
At the same time, there is critical discussion about whether all these still unknown species should actually be discovered, despite the immense financial effort involved. From a scientific perspective, there are compelling reasons to discover all species on Earth. On the other hand, the current extinction rate apparently exceeds the discovery rate. The “Linnaean program” of a complete coverage of all species on Earth within a manageable time is therefore a high priority in the life sciences. The necessary acceleration of the discovery process requires new technologies that include automation and AI.
Emily Sullivan
(Eindhoven University of Technology & the Eindhoven AI Systems Institute, Netherlands)
ML in science: Just a toy?
More and more sciences are turning to machine learning (ML) technologies to solve long-standing problems or make new discoveries—ranging from medical science to fundamental physics and biology. The ever-growing footprint ML modeling has on the production of scientific knowledge and understanding comes with opportunity and also pressing challenges. In this talk, I discuss how philosophy of science and epistemology can help us understand the potential and limits of ML used for science. Specifically, I will argue that ML models in science function in much the same way that highly idealized toy models do. Thinking of ML models as toy models can help to shed light on the scope of ML’s potential for scientific understanding and scientific discovery.