GPTs, Education and (anti)utopic views

This is a short reflection on recommended literature from a reading seminar. From the recommended pile, I chose two papers and two of Feenberg’s books. I read the papers carefully, and I aim to reflect on them and briefly compare them. As for the books, I mostly skimmed them apart from a few parts, so I won't reflect on them separately.

Chan, A. (2022). GPT-3 and InstructGPT: Technological dystopianism, utopianism, and “Contextual” perspectives in AI ethics and industry. AI and Ethics. https://doi.org/10.1007/s43681-022-00148-6

The first article, published in the AI and Ethics journal, briefly describes the recent development of GPT-3 and InstructGPT and emphasizes the potential harms of the technology, in particular those flowing from manipulation and bias. The “contextualist perspective”, i.e., the idea that consequences depend on the context of use, is presented as an approach that evaluates potential trade-offs for the individual contexts in which AI technology is used. I think this relativisation offers a synthesis of otherwise oversimplifying attitudes towards AI and makes the discussion around AI ethics more productive. It is also much closer to the current state of the art in AI. Most teams building AI applications train models using supervised learning on predefined data sets, so the generality of such AIs depends on the variability of their data sets. A good amount of curation in building and managing data sets and test scenarios is much needed, not so much to ensure that AI models are free of biases, but to know those biases in the first place. I believe the new generation of MLOps software will make a positive contribution to the evolution of AI applications by supporting this work with data sets: making sure we know them and that we test the AI models designed on them. The curation of data sets is something AI might be good at as well.
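To make the point about knowing a data set more concrete, here is a minimal sketch of slice-based evaluation, the kind of check I imagine MLOps tooling supporting. The data set, its group annotations, and the model_predict function are hypothetical placeholders of mine, not anything from the paper:

```python
from collections import defaultdict

# Minimal sketch of slice-based evaluation: the goal is not to remove bias,
# but to make it visible by comparing model behaviour across curated slices.

# Hypothetical curated test set with explicit group annotations.
examples = [
    {"text": "first example sentence", "group": "A", "label": 1},
    {"text": "second example sentence", "group": "B", "label": 0},
]

def model_predict(text: str) -> int:
    """Placeholder for the model under test; returns a predicted label."""
    return 1

def accuracy_per_slice(examples) -> dict:
    """Accuracy broken down by group, so differences between slices stand out."""
    hits, totals = defaultdict(int), defaultdict(int)
    for ex in examples:
        totals[ex["group"]] += 1
        if model_predict(ex["text"]) == ex["label"]:
            hits[ex["group"]] += 1
    return {group: hits[group] / totals[group] for group in totals}

print(accuracy_per_slice(examples))  # e.g. {'A': 1.0, 'B': 0.0}
```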

Schäffer, B., & Lieder, F. R. (2022). Distributed interpretation – teaching reconstructive methods in the social sciences supported by artificial intelligence. Journal of Research on Technology in Education, 1–14. https://doi.org/10.1080/15391523.2022.2148786

This article discusses the potential of GPTs in teaching qualitative methods of deep interpretation by embedding AI assistance into qualitative data analysis software. The motivation is rooted in the difference between knowing the reconstructive methods and the actual process of discovery, which is usually supported by research workshops. The case study shows examples of using GPTs to generate interpretations through prompt engineering and then discusses the opinions of doctoral candidates on the possibility of using such AI applications. Finally, the possible use of AI in this setting is described as “distributed interpretation”, because the AI acts as an extension of, or companion in, the research. I have limited experience with qualitative research of the kind described in this article, but I can appreciate the topics that stem from this study. The use of AI challenges traditional research practice and pushes us to approach work in a way that makes us productive in cooperation with AI. It is not only about making today’s research practice more efficient but about making it completely different. At the same time, AI is still seen as a black box. I think that making AI more self-explanatory is a matter of breaking its decision steps into multiple phases in which we might want to know its reasoning. This can be done by building special self-explanatory models and by prompt engineering that explicitly asks the AI to break down the assumptions it made during interpretation.
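As a rough illustration, such a multi-phase prompting flow could be sketched as follows. The call_model function is a hypothetical placeholder for whatever GPT endpoint the analysis software would wrap, and the phase wording is only my own assumption, not the prompts used in the study:

```python
# Sketch of multi-phase prompting for "distributed interpretation":
# each phase explicitly asks the model to expose its reasoning and assumptions.

def call_model(prompt: str) -> str:
    """Hypothetical placeholder: send the prompt to a GPT-style model, return its answer."""
    raise NotImplementedError

def interpret(passage: str) -> dict:
    """Run the passage through separate phases, keeping each answer inspectable."""
    phases = {
        "paraphrase": f"Paraphrase the following interview passage:\n{passage}",
        "interpretation": "Offer one possible interpretation of the passage above.",
        "assumptions": "List the assumptions your interpretation relies on, one per line.",
        "alternative": "Give one alternative interpretation that contradicts the first one.",
    }
    results, context = {}, ""
    for name, instruction in phases.items():
        prompt = (context + "\n\n" + instruction) if context else instruction
        results[name] = call_model(prompt)
        context = prompt + "\n" + results[name]  # carry the growing dialogue forward
    return results
```

The point of such a breakdown would be that the researcher can inspect and contest each phase separately, instead of receiving a single opaque answer.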

Conclusion

The above papers show a productive perspective on AI adoption. The latter case study on distributed interpretation is a good example of the contextual perspective from the former article on AI ethics. The biases and the low transparency of the inner workings of AI applications are shared topics in both papers. I like to think about how narrowing the context and defining the scope of usage help to tackle biases and to set the required levels of transparency in the AI’s reasoning.

I would like to dismiss the question of whether technology such as GPTs in education leads towards utopia or dystopia. Although I see the terms as useful for communication, they are also very problematic. Personally, it seems to me that both of these terms might refer to one and the same thing: there is always something dystopic in the idea of utopia and vice versa. Similarly, technology might be the reason for society’s despair as well as its flourishing, even in the extreme scenario where technology gains total control. Drawing on Feenberg, my opinion is that society is and will be able to see through and control AI in what we already traditionally call critical applications. It is much harder to hold a similar opinion about applications to which we don’t pay as much attention, even though they are critical because of their mass scale of usage or their subtleness, and which society is not used to reflecting on. Some of my open questions on AI in education are: What strategies for working together with AI do we need to develop? How do we develop more transparent models (not only data sets) and communicate the reasoning of AI models well? How do we give voice and ownership of AI to its end users?