I had a wonderful and insightful conversation with my students yesterday, and I cannot stop thinking about it. They bring such a unique, unfiltered perspective, one shaped by their past experiences, and that lived experience is what I appreciate most in class. In fact, I urge them to relate the sociological theories we discuss in class, which can be very dry, to their own lives. Theory is important, but how we engage with theory every day matters more in my eyes. Human interaction and lived experience are catalysts for the exchange of knowledge. This is where things start to get extremely philosophical, with terms like ontology and epistemology. I’ll try to avoid getting too philosophical because I don’t want this to run long. Still, it is important to think about how humans exchange information and knowledge through a philosophical lens, because people are involved in the process.
I view the world through a lens that is different from that of some researchers in my field. I believe we do not exist independently of the outside world; there are many social worlds, each shaped by context and a social environment (family, church, nation-state, etc.). Others believe the world consists of patterns and regularities, and that it is up to us, as researchers, to use our senses to observe them objectively, which is next to impossible. Science isn’t free of human bias: we pick what we study, how we study it, and for whom, often depending on who is funding the research. Yet in that worldview, social context doesn’t play much of a role.
One potential role for A.I. in academia is to fill the shoes of a researcher, or even a student. There are multiple accounts of students using ChatGPT to write papers and theses. ChatGPT is a chatbot developed by OpenAI and launched late last year; it can be used for all sorts of functions: searching, chatting, and writing. As a researcher, I find it very troubling that something like this has the capability to write a full dissertation without any kind of real-world interaction. The tool acts only within its online platform, pulling from data gathered online. This form of exchanging information and knowledge lacks the human interaction that I think is so important. Of course, the data the chatbot draws on was written by humans, but there is a lack of humanness in the process. I do think A.I. and chatbots similar to ChatGPT can serve an important purpose; quantitative data gathering doesn’t necessarily involve human interaction every time. But then again, gathering data this way excludes a person from the process, and with that person go the ethical considerations they would bring.
Don’t get me wrong, I love technology. I’m writing this on a laptop that helps me do all kinds of tasks, and very efficiently. But I wouldn’t substitute my laptop for any kind of human interaction. It gives me a world of knowledge at the click of a button; it cannot, however, give me a visceral human interaction, unless someone figures out how to pass the Turing test. The moral of the thought: we should be very careful with how we approach A.I. in academia. Just because we can do something doesn’t mean we necessarily should. Philosophy and science should stand together if we don’t want to lose what makes us human.
If you have any thoughts, please leave a comment. Engagement is what keeps the conversation moving! Thank you.