Week 1: Decide Technology Policy in the Classroom
In-class exercise: Discuss the experiences of students using “technology” as part of their education, both in the classroom (e.g., laptops, smartphones, AirPods) and outside of it (e.g., ChatGPT, GroupMe, Google Translate).
Begin by introducing Values: the normative principles that students think are relevant for their semester in this class.
Then discuss Tools: the analytical methods by which they (or anyone) might come to know which policies produce the best outcomes with respect to these different Values.
Conclude with Actions: in this case, what procedure should be used to select the policies? Consider:
Democracy: majority vote by students
Technocracy: expert (the teacher) decides
Traditionalism: ban all technology use
Conservatism: look up other classes (in the major/university) and adopt the most common policy
Liberalism: everyone does what they want, but individuals who feel their experience infringed upon can try to persuade others
Liberal socialism: everyone does what they want – subject to everyone in the class having equal technological access
Experimentalism: the policy is randomized – either for the whole semester, class-by-class, or student-by-student
Constitutionalism: for any of the above, should there be a procedure to revisit or amend this policy during the course?
As the instructor, be sure to highlight what has been taken for granted / is necessarily given:
This is happening as part of a university class. This class is governed by laws and policies; it has to meet at set times during the week, and it has to have assignments and grades.
The “discussions” mentioned above are by default conducted by people taking turns speaking in person, usually by volunteering. We could be using other technologies or other rules to communicate. How should we choose how to decide how to choose?
One purpose of this exercise is to illustrate how certain things in our society are changing while others are remaining constant, and how complicated it is to integrate new technology into existing social structures.
Another purpose, the main purpose, is to demonstrate to students that they can actually decide which technologies should be used, and how.
Meta-Assignment (for the instructor): actually implement and (at least attempt to) enforce the students’ chosen technology policy for the duration of the course. Reflect on how it went: what had your technology policy been before this assignment? What policy would you choose now?
Report back to the rest of us. We all really want to know how it went.
Here’s the rest of the Syllabus of Actions. Good luck!
There are several important but seemingly disjoint conversations around technology use by young people.
Jonathan Haidt and Jean Twenge have been arguing that social media use is harmful for children and teenagers, preventing healthy development and causing anxiety.
The US House of Representatives and I have been arguing for banning TikTok, for a variety of reasons.
And seemingly every college professor is tearing their hair out about LLMs.
These conversations are only seemingly disjoint; the general problem is helplessness. The hardware is in place, the phones are ubiquitous. It would be unimaginably ~~dangerous~~ for parents and children to not have the 24/7 capacity for communication. So now the software companies compete for our attention in every waking moment, shipping new products generated with the data collected from the old products, forcing the rest of us to deal with it.
The individual, the family, the classroom — none of these entities has the capacity to resist new digital technology. These are radical monopolies, in that they compel adoption by imposing costs on non-users.
Capital-driven technological “progress” is restricting the possibility of human freedom. As Ivan Illich puts it, “An unlimited rate of change makes lawful community meaningless.”
The first step away from helplessness is thinking about the world we want to live in, rather than letting tech companies set the agenda. The second step is the practice of actually taking actions other than the actions allowed/encouraged by the tech companies: we cannot like, view, or post our way out of this.
Institutional education is far from innocent here; the sixteen years spent teaching children and adolescents to compress their intellect and creativity into a format that makes it easiest for us to evaluate them are excellent training for becoming social media “creators.”
But institutional education is something that I and my colleagues have some control over, and therefore the best place for us to start. Hence, the Syllabus of Actions, the result of last fall’s Building the Society We Want workshop, co-hosted by Princeton’s Center for Information Technology Policy and Center for Human Values.
The link below provides context for the workshop:
But the central theme was rejecting the premise that any new technology constitutes “progress” except insofar as it causes too many identifiable “harms.”
The Syllabus is organized around Actions. The default Actions for a university course are reading, writing, discussing — perhaps some more formal symbol manipulation (coding, math) or instrument use.
Most of the Actions involve interacting with the world outside of the classroom — an advantage of digital technology is that we can in fact use it in disruptive or empowering ways, when we take the initiative to avoid using it in the way that maximizes the revenues of the producers of the hardware/software.
Taking Actions re-emphasizes our embodied human nature. To the machines at the heart of digital media companies, we are merely eyeballs, cochleae and fingertips — connected to a credit card. The more time we spend in their shitty little free-to-play video game, the more we reduce ourselves to those eyeballs, cochleae and fingertips. Obviously, there’s only so much physical Action that can take place within the given reality of a university class — a more radical Syllabus for a class on the morality of technology might begin with ecstatic dance and end with a shift at an elderly care facility — but even when Acting within the affordances of digital hardware, we can reclaim our own agency by disrupting the cycle of normalization in digital life.
Week 10 — contribute to a public good by editing Wikipedia. Make the good part of the internet better. Week 13 — perform online gig work. Label photos used to train the next generative image model, Google Capricorn or whatever. Do it wrong.
The central question of this course is reflected in the assignment for Week 1. What’s the ideal technology policy for society?
I don’t know. But I do know that we’re never going to find out if the only relevant Actors in this space are tech companies. Try acting differently — and report back. We all want to know how it goes.
Thanks for sharing this, Kevin. I really like this and just downloaded the full PDF from the workshop. Given my end-of-semester crunch I probably won't have the chance to look in much detail before sometime in May but want to try some of these ideas out in my classes this fall.
Would love to follow up more once I get a chance to digest it more.
Like Josh said, this is well-timed for thinking about teaching in the fall. I have been planning an in-class exercise on the first day of my history class to discuss norms and decide policies about the use of technology, including and especially generative AI. Love the straightforward political vocabulary to introduce procedures for determining how we decide. I will include these terms when I write up the exercise. I will post the write-up in May and report about how it goes in September.