If you want to keep up with AI, you can’t. At least that’s what it feels like. That is the truth I’ve been struggling to come to grips with as I navigate the ethics of AI use in classrooms and workplaces and observe the exponential growth of AI tools for teachers and workers. There are new developments every second, and there are considerations that go far beyond the basics of intellectual property and data sharing… so let’s get stuck in. Read on to learn about some key considerations for policy writing, as well as the A FLOW model for testing and reflecting once your policy is written.
I recently presented a workshop on AI tools in the classroom, grounded in a whakatauki as well as a metaphor. I used the ocean as a metaphor because, even if we feel well-informed at the moment, it is more than likely that we’re just paddling in the breakers: the flow-on effects of our use of AI are vast and deep, like an ocean. It is a wicked problem.
My choice of whakatauki is also fitting - Kia mate ururoa, kei mate wheke - to fight like a shark rather than give in like an octopus. My challenge to all users of AI is to consider the why before diving right in. Resist the status quo, and resist doing something just because it is easier. Consider the repercussions and thrash about before you make a decision. Thrash to push for better learning experiences. Thrash for better pedagogy and andragogy. Thrash to strive for a better learning experience for all learners, and don’t accept the existing models simply because they exist. Just because something is fast doesn’t mean it is good.
So many AI tools packaged for educational purposes sadly perpetuate 20th-century knowledge models and do not ‘push the envelope’. My hope is that, with more knowledge of how AI models are programmed, of their biases and preferences, and of the ‘why of AI’, those who work in education and training will work harder to make AI do something more responsible, ethical and transformative. Writing useful policy is potentially the first step in designing clear guidelines.
What should you include in an AI policy?
AI is evolving so rapidly that it is important for schools and workplaces to ride the wave. We need to be informed about which AI tools are fit for purpose, and we need to be able to approve specific tools, outcomes and systems. For this, we need clear objectives about when the use of AI is appropriate, and we need to be able to give recommendations for how and why information can be shared.
At this stage, it is useful to consider some user stories of when and how AI might be used, and also to explore scenarios while wearing a ‘black hat’ in order to consider the long-term implications.
Set boundaries and design systems:
What boundaries need to be put in place about data sharing? (No names, no confidential information, no data shared without permission)
What are the expectations of the institution or organisation? (AI can be used for ideation assistance or proof-reading but not for any published content)
Who are the people who might oversee governance of AI? (And how might it be monitored?)
Consider the legal implications:
What disclosure regulations need to be in place for data sharing?
How can you safeguard against discrimination and bias?
What are the internal processes for data breaches?
Who is liable for data breaches or authenticity queries?
What industry-specific regulations can you draw on?
What are the terms of the AI tool/s that is/are being used?
How is intellectual property safeguarded?
What are your internal systems for governance and monitoring?
What happens if there is a breach?
Managing Risk:
What permissions and guidelines need to be established to indicate which tools are approved and how they are used?
What risk assessments need to be undertaken against specific scenarios?
What regulations govern outputs? Must they be reviewed by humans before publication?
How will issues or incidents be documented and reviewed?
Ethics:
AI does not come without an environmental footprint - how might you reconcile your use of AI with your Sustainable Development Goals?
How can you monitor your increased environmental impact?
How might you offset your carbon footprint?
How might you make users more aware of both technical and environmental considerations?
How might you ensure the privacy of information shared?
Training:
A lot of organisations are presenting AI tools as if they are a quick win, without unpacking the legal, ethical and environmental considerations. How might you ensure that staff are equipped to make informed decisions about whether their use of AI is appropriate, necessary or effective?
A FLOW model:
The following acronym could be useful for beginning your deep dive into AI.
A - Anticipate - explore some user stories or scenarios to predict how AI might be used in the absence of guidance. Use these scenarios to design policy and training that meet the needs of your participants.
F - Facilitate - facilitate a training session on the responsible and ethical uses of AI and design a workplace scenario for a test/launch activity.
L - Launch - provide a tool, some prompts and a desired output that relate to your context, for employees to explore.
O - Observe - look for anomalies in the uses of AI and adapt policy accordingly. Allow questioning and exploration alongside staff as they navigate ‘the deep’.
W - Weigh - consider how effective the parameters and policy are, and weigh input against output in terms of how the tools were used and what the outcomes were.
Repeat with a new cycle to iteratively design a flexible policy that is capable of riding the AI wave.
So what do you think? Writing AI policy and using AI is about so much more than saving time. We need to consider the legal implications of data sharing and collection, governance and guidelines about purpose vs product, environmental implications, intellectual property considerations, disclosure regulations, risk assessments, evolutions in best practice and more.
Some additional thinking prompts:
How can we recognise and address bias in LLM training?
How can we push for more inclusive data sets for training?
What are our processes for consent?
What might our incident protocols be if the use of AI leads to negative outcomes?
Who should be involved in feedback loops about the use of AI?
How can AI be used to ensure equitable outcomes?
The key takeaway is that we need to be critical and aware of our use of AI so that we can actively contribute to a culture of proactive accountability and sustainability.
What do you think? Did I leave anything out?
References: Create your AI Policy, Clayden Law, e-book, 2024.