It challenges our traditional understanding of knowledge - particularly the concept of shared knowledge. This development raises fundamental questions within epistemology and forces us to rethink how we define, produce, and share knowledge in our society.

Before addressing how artificial intelligence challenges our shared knowledge base and how we might respond to it, defining what we mean by knowledge, science, and shared knowledge is essential.

What Are Knowledge and Science?

As a discipline, epistemology has long grappled with questions about what knowledge is, how it is acquired, and how its validity can be ensured. At the same time, ontology—the study of being and the fundamental nature of reality—has played a crucial role in shaping our understanding of what exists and how various entities relate. These two philosophical branches are closely interconnected: our perception of reality (ontology) influences what we consider valid knowledge (epistemology).

Since the Enlightenment, or perhaps even since Gutenberg printed the first book, we have built a tradition of creating and sharing a common knowledge base. This tradition has been pivotal for scientific progress and societal development. It is rooted in Plato’s classical definition of knowledge as “justified true belief,” which remains central to modern epistemology.

💡
The Danish Dictionary defines knowledge as:
“Everything a person has learned about one or more subjects through experience or education.”

The Great Danish Encyclopedia defines science as:
“Science is a general term for systematic methods of generating, organizing, and disseminating knowledge and skills, as well as the results of these activities and the organizational forms and administrative units (such as fields and disciplines) under which they take place.”

In short, science is both the activity of acquiring, organizing, and disseminating knowledge and the knowledge obtained through scientific methods.

These definitions emphasize that we acquire knowledge through active learning and experience.

Science encompasses many different aspects. Søren Kjørup, a professor of the philosophy of science, describes science as:

“A collective term for a multitude of diverse, partially overlapping disciplines that, from different perspectives, can be grouped together (though likely never entirely consistently) into various categories.”

We typically categorize science into natural sciences, social sciences, humanities (humanistic sciences), health sciences, and technical sciences, each with its own subject areas, methods, and explanatory models.

When combining science and knowledge, we arrive at scientific knowledge. Scientific knowledge is based on scientific investigations that can be tested and justified through sources, observations, and/or experiments.

In scientific research, high quality and credibility are ensured by publishing results, for example, in scientific journals. This involves a quality assurance process, such as peer review or other forms of scholarly evaluation. Researchers share their findings to benefit society and enable other researchers to build upon them. At the same time, researchers are bound by principles of good scientific practice, including shared norms for scientific work, laws against scientific misconduct, and the safeguarding of their own credibility.

The concept of shared knowledge refers to the collective understanding and information shared among the members of a society or professional group. Shared knowledge enables effective communication, collaboration, and progress in science and other fields.

However, phenomenology emphasizes that all knowledge is created within a social, cultural, and historical context and that researchers always interpret based on their preconceptions. Similarly, hermeneutics stresses that a researcher can never be entirely neutral or objective. Therefore, researchers must make their choices visible and articulate their preconceptions in their reporting.

The philosophy of science distinguishes between different forms of knowledge and various scientific approaches. Natural sciences, social sciences, and humanities each have distinct methods and epistemological foundations. Yet, they share a common principle: scientific knowledge must be based on systematic investigations that can be tested and justified.

We should always ask ourselves: How did we acquire the knowledge we possess? And what is our knowledge about? These are, respectively, epistemological and ontological perspectives on knowledge.

How is artificial intelligence challenging our shared knowledge base?

Generative artificial intelligence creates new challenges for our understanding of shared knowledge and scientific methods. When we traditionally search for information in articles, books, or on the internet, we select the sources ourselves. In most cases, we know where a source comes from and can therefore more easily assess its credibility and the sender's intentions. Internet searching is, of course, subject to algorithmic power: we do not necessarily get objective answers, but answers that primarily serve unknown and opaque algorithmic criteria - and often advertisers' interests - just as the internet is flooded with ever more false stories and misinformation. Still, in most cases we can see who the sender is and therefore try to assess the credibility of the source. We are, however, beginning to see generative artificial intelligence used in writing peer-reviewed scientific articles - a type of source we traditionally consider highly credible. This can undermine our trust and thus make it even harder to establish a reliable shared knowledge base.

Generative artificial intelligence systems produce information and suggest solutions based on extensive datasets and probabilities, without explaining how these results are achieved. The biggest challenges are missing references, incomplete data (there is no way to know whether a dataset is fully representative), non-transparent datasets, and potentially significant bias in the data - generative artificial intelligence is no better than the data it is trained on! This challenges our traditional understanding of what constitutes valid knowledge and how to verify it. Matters get worse when internet search engines incorporate generative artificial intelligence that summarizes search results and thereby also selects what counts as most important in them. A study conducted by DJØF among their students shows that 85% use ChatGPT as a search engine for their studies, and 23% also use it for exams!

Our approach to source criticism is further challenged when search functions are integrated into language models, as OpenAI does in the latest version of ChatGPT, called ChatGPT search. OpenAI's alternative to traditional search engines like Google and Bing delivers a concentrated search result with references to more or less credible sources. This raises new questions about how we critically evaluate information. At viden.ai, we will soon explore what impact ChatGPT's search function might have on the education sector.

A central challenge from the philosophy of science in this context is the question of epistemic dependence. As we increasingly use AI systems for information searching and decision-making, we risk becoming dependent on these systems in a way that undermines our capacity for critical thinking and independent knowledge production. We are also subject to automation bias, where users quickly come to place too much trust in the compelling answers the systems provide, and confirmation bias, where the system's suggestions are weighted differently depending on whether they confirm or contradict one's existing understanding. We simplify complex issues and information (simplification bias) and thus miss nuances and details.

Several teachers have told me that students have confronted them and claimed that the just-covered material is wrong.

"ChatGPT says something else, and I trust it more"

In one case, the teacher seized the situation and investigated with the students. It turned out that the answer from artificial intelligence was heavily biased and probably came from a different worldview reflected in the underlying data.

This raises questions about how we can balance exploiting the technology's possibilities with preserving our cognitive autonomy. At the same time, artificial intelligence's ability to generate different perspectives on the same topic challenges the idea of an objective truth - a central concept in many traditions within the philosophy of science. This can lead to epistemic relativism, where truth is considered relative to a particular frame of reference or context - one that depends on, for example, the user's technological literacy, basic subject knowledge, and ability to prompt the artificial intelligence. This development forces us to reconsider how we can maintain the idea of shared knowledge in a world where many potentially conflicting perspectives can easily be generated in, for example, a classroom.

This development raises particular challenges for the education system, which has traditionally been based on the dissemination of shared knowledge through standardized curricula and common learning goals. How do we ensure that pupils and students develop a shared knowledge base when each of them can access potentially divergent information from artificial intelligence? This question is not just pedagogical but belongs to the philosophy of science, as it touches the core of what we understand by knowledge and how it should be conveyed.

If each student receives different explanations of the same topic - perhaps differing only in minor nuances - we risk losing the shared knowledge base necessary for a coherent and functional discourse in school and society.

One of the most fascinating questions in the philosophy of science that artificial intelligence raises concerns the relationship between knowledge and understanding. When artificial intelligence can generate meaningful content without real experience, it challenges our perception of what it means to "know" something. This forces us to reconsider the relationship between information, knowledge, and understanding - central concepts in epistemology.

Generative artificial intelligence can also be seen as an epistemic technology - a tool that not only processes information but actively shapes what is considered knowledge. Such systems are designed to derive patterns and generate output that mimics human reasoning, without transparency about the underlying processes. This opacity challenges traditional epistemological principles, which place great value on understanding and verifying the processes through which knowledge is produced.

What can we do about these challenges? To address them and preserve shared knowledge, we must strengthen our focus on the philosophy of science and critical thinking. We must develop new methods for validating and integrating AI-generated information into our shared knowledge base while upholding the scientific method and its essential principles of inquiry.

This requires (perhaps) a new approach to education and research, where the focus is not just on accumulating information but on developing the ability to evaluate, contextualize, and critically apply information, regardless of origin. It also involves a deeper understanding of the epistemological and ontological assumptions underlying our traditional scientific methods and the new AI-based approaches to knowledge production.

Generative artificial intelligence thus challenges our traditional understanding of what constitutes valid knowledge and how to verify it. At the same time, it raises ontological questions about what exists in a world where artificial intelligence can create convincing "versions" of reality.

Artificial intelligence allows us to revisit, and perhaps expand, central concepts in the philosophy of science such as objectivity, validity, and reliability. It forces us to consider how we can maintain shared knowledge in a world where information is easily accessible but its quality and reliability are often unclear.

Ultimately, it is about balancing the exploitation of generative artificial intelligence's potential with preserving and strengthening the fundamental principles of scientific method and critical thinking. Only then can we ensure that our shared knowledge base remains solid and credible. This challenge is relevant not only for academics and researchers but for society as a whole. In a world where information - and thus also misinformation - spreads at lightning speed, it is crucial that we all develop a deeper understanding of the ideas behind the philosophy of science and epistemology. It is a challenge requiring a joint effort from educational institutions, researchers, and decision-makers.

By taking this challenge seriously and actively working to develop new approaches to knowledge and the philosophy of science, we can not only address the threats posed by generative artificial intelligence but also harness its potential to enrich our shared knowledge base. We must continue to ask critical questions, challenge our assumptions, and actively engage in the epistemological and ethical debates that this technological revolution generates. It also requires that we stay curious and learn about and with the technology in order to find the right ways to use it together.

For this to be possible, we must update the goals, curricula, and rules our education system builds upon. We need mandatory curriculum goals that address the new technology as a subject in its own right, and we need to allow critical and reflective use in exams and tests if artificial intelligence is not to short-circuit our education system and thereby undermine our shared knowledge base. At the same time, we must remember the importance of solid basic subject knowledge. We still need to learn without technology in order to build a strong knowledge foundation that enables us to function when technology is unavailable. Basic subject understanding also allows us to use technology correctly - for example, to prompt precisely and with the proper context - and, just as importantly, to assess whether the compelling answers from generative artificial intelligence are correct. Only by learning about and with technology, and by testing pupils' and students' knowledge of it and their ability to apply it, can we ensure that the foundation of the knowledge society persists.

In this transition, we must remember that shared knowledge is not just an academic concept but a cornerstone of our trust-based society. Through shared understanding and knowledge, we can communicate meaningfully, collaborate effectively, and make informed decisions as a society. Preserving and developing this shared knowledge base in the age of generative artificial intelligence is therefore not just a challenge for the philosophy of science but a societal necessity.

Sources:

  • https://danmarkshistorien.dk/vis/materiale/oplysningstiden
  • https://denstoredanske.lex.dk
  • https://denstoredanske.lex.dk/videnskab
  • Forskning og samfund - en grundbog i videnskabsteori, Søren Kjørup, Gyldendal Undervisning, 2003, (2. udgave)
  • Oh, P.S., Lee, GG. Confronting Imminent Challenges in Humane Epistemic Agency in Science Education: An Interview with ChatGPT. Sci & Educ (2024).
  • Innes, Judith E., & Booher, David E. (2018). Planning with complexity: An introduction to collaborative rationality for public policy (Second edition). Routledge, Taylor & Francis Group.
  • Scardamalia, M. (2002). Collective Cognitive Responsibility for the Advancement of Knowledge. In B. Smith (Ed.), Liberal Education in a Knowledge Society (pp. 67-98). Chicago, IL: Open Court.
  • Gillies, Donald. (2020). Artificial Intelligence and Philosophy of Science from 1990s to 2020. 
  • Ward, Adrian F. (2021). People mistake the internet’s knowledge for their own. Proceedings of the National Academy of Sciences, 118(43).
  • Krause-Jensen, N. H., & Hansen, B. B. (2024). Hvad skal vi bruge videnskabsteori til? Om videnskabsteori på professionsuddannelser. Tidsskrift for Professionsstudier, 19(37), 96–107.
  • https://misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/
  • https://www.information.dk/debat/2023/07/humaniora-krisen-akademiske-selvkritik
  • Humaniora – et essay, Mikkel Thorup, Aarhus Universitetsforlag, 2022
  • https://www.dtu.dk/forskning/rammer-for-forskningen/principper-for-god-videnskabelig-adfaerd
  • https://forskerportalen.dk/da/generelt-om-publicering/#:~:text=Publicering%20af%20forskningsresultater%20sker%20navnlig,har%20udgivet%20videnskabelige%20publikationer%20kommercielt.
  • https://www.retsinformation.dk/eli/lta/2017/383