Long and Magerko (2020) define AI literacy as a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.
Within education, AI is a tool to enhance your learning and, like all tools, it needs to be used in an ethical and responsible way (University of Calgary, n.d.).
When we do research online, we need to think critically about the sources we use and whether we want to build our research on them. Some questions we ask ourselves are:
We must also ask ourselves questions when using AI software tools. The LibrAIry has created the ROBOT test to use when evaluating AI technology.
Reliability
Objective
Bias
Ownership
Type
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
To cite in APA: Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test
Algorithm:
Algorithms are the “brains” of an AI system: they determine the decisions the system makes. In other words, algorithms are the rules for what actions the AI system takes. Machine learning algorithms can discover their own rules (see Machine Learning for more), or they can be rule-based, where human programmers supply the rules.
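The distinction between rule-based and learned rules can be sketched in a few lines of Python. This is our own toy illustration, not from the glossary: the spam-filter scenario, messages, and keyword logic are all invented for the example.

```python
# Rule-based: a human programmer writes the rule directly.
def rule_based_is_spam(message: str) -> bool:
    return "free money" in message.lower()

# "Learned": the rule (here, a keyword list) is derived from labelled examples
# instead of being written by hand.
def learn_spam_words(examples):
    spam_words, ham_words = set(), set()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words - ham_words  # keep words seen only in spam messages

examples = [
    ("claim your free money now", True),   # labelled spam
    ("meeting moved to friday", False),    # labelled not-spam
]
learned = learn_spam_words(examples)

def learned_is_spam(message: str) -> bool:
    return any(word in learned for word in message.lower().split())
```

Both functions classify messages, but only the second derived its rule from data — which is also why the data it is given matters so much.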
Chat-based generative pre-trained transformer (ChatGPT):
A tool built with a type of AI model called natural language processing (see definition below). In this case, the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); and (3) can process sentences differently than other types of models (Transformer).
Machine Learning (ML):
Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. AI is a more general term. In ML, an algorithm will identify rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision making as they go through data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Algorithms used in machine learning require massive amounts of data to be trained to make decisions.
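To make “identifying rules and patterns in the data without a human specifying them” concrete, here is a toy illustration of ours (not from the glossary): a one-dimensional classifier that discovers its own decision rule, a threshold, from labelled data. The numbers are invented.

```python
def train_threshold(points):
    """Learn a 1-D threshold separating two labelled classes.

    points: list of (value, label) pairs, with labels 0 or 1.
    The decision rule (the threshold) is discovered from the data:
    the midpoint between the two class means.
    """
    mean0 = sum(v for v, y in points if y == 0) / sum(1 for _, y in points if y == 0)
    mean1 = sum(v for v, y in points if y == 1) / sum(1 for _, y in points if y == 1)
    threshold = (mean0 + mean1) / 2

    def model(value):
        return 1 if value >= threshold else 0

    return model, threshold

# Invented training data: small values labelled 0, large values labelled 1.
training_data = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
model, threshold = train_threshold(training_data)
```

No human wrote the threshold of 5.0; the algorithm computed it from the training data — and had the data been skewed, the learned rule would be skewed too.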
Natural Language Processing (NLP):
Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).
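A first step in many NLP pipelines is turning raw text into tokens that can be counted and analysed. This minimal sketch (ours, not from the glossary) uses only the Python standard library; the sample sentence is invented.

```python
import re
from collections import Counter

def word_frequencies(corpus: str) -> Counter:
    """Tokenise text into lowercase words and count their frequencies."""
    tokens = re.findall(r"[a-z']+", corpus.lower())
    return Counter(tokens)

freq = word_frequencies("The cat sat. The cat ran.")
```

Real NLP systems go far beyond counting — modelling structure, grammar, and meaning over corpora of hundreds of thousands of words — but tokenisation like this sits underneath most of them.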
Training Data:
This is the data used to train the algorithm or machine learning model. It is generated by humans in their work or other past contexts. While it sounds simple, training data matters because the wrong data can perpetuate systemic biases. If you are training a system to help with hiring people, and you use data from existing companies, you will be training that system to hire the kind of people who are already there. Algorithms take on the biases that are already inside the data. People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that trains them. (Note: we all have biases! Also, our data reflect the biases in the world.)
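The hiring example above can be sketched in code. This is our own illustration with invented data, not a real hiring system: a naive “model” trained on historical hiring decisions simply reproduces the majority pattern in its training data, bias included.

```python
from collections import Counter

def train_hiring_model(past_hires):
    """'Learn' which candidate profile past hiring favoured.

    past_hires: list of (profile, was_hired) pairs from historical data.
    The model just memorises the most-hired profile — so any bias in
    the historical decisions becomes the model's rule.
    """
    counts = Counter(profile for profile, hired in past_hires if hired)
    favoured = counts.most_common(1)[0][0]

    def model(profile):
        return profile == favoured

    return model

# Invented historical data: group "A" was hired far more often than "B".
history = [("A", True), ("A", True), ("A", True), ("B", False), ("B", True)]
model = train_hiring_model(history)
```

The model rejects every "B" candidate even though a "B" candidate was hired in the training data — the skew in the history has become the rule.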
Thank you to the Center for Integrative Research in Computing and Learning Sciences (CIRCLS) for these definitions from their Glossary of Artificial Intelligence.
Check out this video to learn more about the limitations of Artificial Intelligence.
As the role of Artificial Intelligence grows in our daily lives, there are limitations (as noted above) and ideas that machine learning continues to struggle with. Indigenous knowledges emphasize place-based learning, interconnectedness, and relationality, which AI does not reconcile with easily (Lewis, 2020). As AI tools have been shown to be biased against communities of colour and to act as a tool of colonial knowledge, there is discussion surrounding the relationship between AI and Indigenous knowledges. Below are some selected readings that discuss this further.
Highlights from a University of Alberta event with guest speaker Jason Edward Lewis, a design and computation arts expert, discussing how AI is being developed with built-in biases.
A recorded one-hour event from the University of Ottawa with Professor Jason Edward Lewis on how Indigenous epistemologies and ontologies can contribute to the global conversation regarding society and AI.
The Indigenous Protocol and Artificial Intelligence (A.I.) Working Group develops new conceptual and practical approaches to building the next generation of A.I. systems.
A position paper on Indigenous Protocol (IP) and Artificial Intelligence (AI), intended as a starting place for those who want to design and create AI from an ethical position that centres Indigenous concerns. From Concordia University.
Indigenous Protocol and Artificial Intelligence Working Group Position Paper
An essay by Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite discussing Indigenous Epistemologies and artificial intelligence.
In this paper, the authors share their journey, starting with an international group of Indigenous technologists at the inaugural workshop series in Hawaii in 2019 and leading to the IP//AI Incubator in March 2021.
Key learnings from the foundations of these works were that Indigenous AI needs to be regional in nature, conception, design, and development; tethered to localised Indigenous laws inherent to Country; guided by local protocols to create the diverse standards and programming logic required for the developmental processes of AI; and designed with our future cultural interrelationships and interactions with AIs in mind.
A paper by Joichi Ito looking at different ways of flourishing with technology.
A podcast in which McGill University professor Noelani Arista explains how some Indigenous communities are thinking about their relation to technology and the data that feeds artificial intelligence.
Thank you to the University of Calgary and Bronte Chiang for the information in this guide.