One of the major challenges you will encounter when using algorithms or AI systems is algorithmic bias: the tendency of an algorithm or AI system to produce decisions or outputs that reflect, reproduce, or reinforce inequitable social conditions.
One important consideration when engaging with GenAI is the data used to train these systems. GenAI outputs are shaped by training data drawn largely from the open web, which presents significant limitations. A lack of diversity on the engineering and programming teams behind AI models can also be a significant source of bias, potentially making an AI system a vehicle for reinforcing a singular or hegemonic identity or perspective (Collett & Dillon, 2019).

While this is a limitation of GenAI, and of the open web more generally, it also provides an opportunity to be more thoughtful and reflective about your research process. How so? Consider the gaps in what a GenAI output presents. Whose voice(s) and perspective(s) are not represented? What histories or lived experiences are not accounted for in a given topic? This reflection should lead you to conduct research more holistically, perhaps guiding you to the platforms (books, articles, websites, interviews, media, and more) where the knowledges of marginalized voices and perspectives are being shared.
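To make the training-data point concrete, here is a minimal, hypothetical sketch in Python. The tiny corpus and the frequency-count "model" are invented for illustration and bear no resemblance to a real GenAI system, but they show the same dynamic on a small scale: when one perspective dominates the training data, the system's most likely output reproduces that perspective, and the minority framing disappears entirely.

```python
from collections import Counter

# Hypothetical toy corpus standing in for skewed open-web training data:
# 90% of documents use one framing, 10% use another.
corpus = (["the engineer finished his design"] * 90
          + ["the engineer finished her design"] * 10)

def next_word_counts(docs, prompt):
    """Count how often each word follows `prompt` in the training documents."""
    counts = Counter()
    for doc in docs:
        if doc.startswith(prompt + " "):
            counts[doc[len(prompt) + 1:].split()[0]] += 1
    return counts

# A maximally simple "generator": always emit the most frequent next word.
counts = next_word_counts(corpus, "the engineer finished")
print(counts.most_common())         # [('his', 90), ('her', 10)]
print(counts.most_common(1)[0][0])  # 'his' -- the majority framing always wins
```

Note that the underrepresented framing exists in the training data yet never surfaces in the output. Real systems soften this effect with probabilistic sampling and data curation, but the underlying distributional skew, and the gaps it creates, remain.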