Episode 1 - People Talking in the Hallways

Student 1 00:00
Have you found most people are receptive to AI?

Bill Williamson 00:04
There's a wide range of reactions, yeah.

Bill De Herder 00:08
A lot of it will fall to extreme perils or extreme utopias as a result of using the technology. They think that it's going to be magical, and it's going to destroy us or save us.

Bill Williamson 00:18
We try to be more measured and pragmatic and see it only for what it is, what it can do, and what it might accomplish. We want to welcome you to Bills and Bots. This is a new podcast that we're launching that focuses on AI in writing assignments.

Bill De Herder 00:33
That's right. It's the Attack of the Killer Essay-Writing Robots, and our podcast is the tinfoil hat you need to protect yourself.

Bill Williamson 00:40
I'm Bill Williamson, from SVSU's Rhetoric and Professional Writing Department, and a professor of technical communication.

Bill De Herder 00:46
and I'm Bill De Herder, I'm the director of the Writing Center here at Saginaw Valley State University.

Bill Williamson 00:49
It's the run-up to the end of the semester here, dealing with all of the things that we deal with in the run-up to the end of the semester, which is mostly student stress.

Bill De Herder 00:57
Yeah, they're super chewed up, burned out. They're difficult to motivate beyond telling them to take a nap.

Bill Williamson 01:06
This is the time, though, when, you know, the truly desperate are going to reach for AI to do the things that they think they don't have the time to do.

Bill De Herder 01:12
Yes, yes. But they won't have the skill set to be able to get anything useful out of the machine now, because they haven't listened to our podcast yet. But I did hear Charlie telling me that he used it to write a discussion post, which goes back to me shaking my stick at the sky, saying that the discussion post genre of assignment is dead, unless you're gonna do something pretty crafty with it.

Bill Williamson 01:36
Yeah, you know, it's one of those things that probably should have died with the pandemic. So both of us ran down this rabbit hole, I guess, a wormhole. We entered into the realm of AI about a year ago when it all exploded. And we've been focusing on this kind of stuff in our research and our teaching and our writing center work. And we're going to try to share some knowledge with you, and we're going to try to help you through some things. We've been asking people for questions, and this is where we launch into some of the conversation.

Bill De Herder 02:07
For this episode, we headed out to a busy commons area on Saginaw Valley State University's campus. We wanted to talk to undergraduate students to find out how they're thinking and feeling about generative AI. A lot of AI-related news plays on anxieties about the future, and we wondered how these emotionally driven narratives might be shaping student views about college writing and life after graduation. We met students from a variety of academic backgrounds, with a wide span of experience with artificial intelligence, at different stages of their college careers. We had some interesting conversations. We're gonna play some clips of what these students had to say about working with AI and their worries about AI, and we're gonna respond to some of the things that they're saying.

Bill Williamson 02:54
It was amazing. We hung out in this area for, what, two, two and a half hours or something like that, and, you know, a number of people stopped by. It was great that we had a nice little cross section, or microcosm, or whatever you want to call it, of ideas about AI: the perils, the promises, the fears, all of that. So, yeah, we'll see where we go.

Student 2 03:21
When I use AI for writing, I usually just use it to generate ideas, or if I need, like, a last-minute grammar checker, but I don't really rely on that. So I usually just typically use, like, Grammarly. And I definitely have a lot of concerns about AI, but not a lot in writing, but in art, because as a graphic design major, I worry that it will either start limiting jobs or it will start feeding my own designs into the AI and stealing them.

Bill De Herder 03:47
This graphic design student began by talking about artificial intelligence and then dropped the brand name Grammarly, right? And I feel like most instructors, when they're trying to ban artificial intelligence in their classroom, don't realize that Grammarly counts. And Grammarly is now advertising itself to all users as having this built-in LLM technology.

Bill Williamson 04:10
And it has for a while. That's one of the hidden little secrets there: people do not process how many of these tools that we have become comfortable with and accustomed to have actually had AI elements for a very long time. Yeah.

Bill De Herder 04:25
And so even if you're just using the built-in grammar checker, it's gonna make you sound, you know, statistically more like how these LLMs sound when they just put something out. Right.

Bill Williamson 04:40
And of course, Grammarly is marketed for academics. It's marketed for people who are in academic settings who are doing scholarly kinds of writing, you know, at least the version of it that we practice at the undergraduate or graduate level. Maybe it's good for professional settings as well, but the idea behind it is that carefully cultivated voice of intelligent-sounding or educated-sounding writing.

Bill De Herder 05:08
There's so much around this for me to be frustrated about, because academic writing forces students to write in very inelegant ways, right? They have to perform in very particular ways, using particular language. And then you have these artificial intelligence systems that are designed to sound more and more like that. So these students are pressed to write more woodenly, the systems pursue that more wooden academic language, and then the students ultimately get accused of using artificial intelligence, because something out there is trying to sound like them.

Bill Williamson 05:54
Right? Well, and if you go back to the past decade, there were actually job descriptions that you could find: these companies, the tech companies, were hiring writers to write stuff that they wanted to sample. So, you know, they're conscious of the criticism that if you feed popular authors, if you feed published authors into the system, that's going to lead you down those pathways. That's the standard. And so they hire a bunch of writers, but what are they sharing? Academic writing. And those are the standards that are being established in the system. So, yeah, it makes a lot of sense. And then for her to make the transition, she recognizes that graphic designers are potentially compromised by having their artwork fed into the system. Interestingly, she doesn't think of it the same way with writers. At least she didn't say it.

Bill De Herder 06:49
Some graphic designers have taken on the rhetoric of calling something like DALL-E or Midjourney, you know, these generative AI systems that create images instead of text, they talk about it like, oh, it's my paintbrush, and my prompts move the paintbrush, right? But I think, at least at this point, at time of recording, and who knows if this will change, robots are not allowed under US law to own things. You can't use this and then copyright the output.

Bill Williamson 07:21
That'd be an interesting set of challenges that we will see down the road. Like, will that line move? Will that boundary between the human and the algorithm shift when it comes to creative control, or creative recognition, rather?

Bill De Herder 07:39
Right? I mean, if you took that to the furthest extreme, if you used Microsoft Word to write your novel, you couldn't copyright it, because it had a spell checker.

Bill Williamson 07:47
Right. You know, goofy example that I'm going to throw out here. So what if I set up a rig where I connect, say, five or six paintbrushes to my power drill, dip them in different colors of paint, and then I just aim it at a canvas and pull the trigger? Because I'm using the tool, and I'm not really wielding the toothbrush. Toothbrush? Paintbrush. There's a new device, yes, for brushing your teeth with a power drill. I'm not sure that our dentists would like that very much. But, you know, we wouldn't criticize the tool in the same way. So you can make a comparison between a power drill and generative AI. And at the point where my power drill becomes AI-driven, I think it'd be time to stop using it.

Bill De Herder 08:39
Let's move on to our next student.

Student 3 08:44
Mostly, when I use, like, ChatGPT, it's for essay writing. I won't just, like, copy and paste whatever essay it gives me. I'll, like, copy and paste the prompt, say, follow the instructions of this prompt, write me an essay. And I'll use it as, like, an example essay to kind of, like, build off of. It's also helpful to be able to, like, plug it back in and say, did you write this? Just to check to see, like, if someone puts it back in, will they detect it? Because at the end of the day, it's not like I'm trying to, like, get away with cheating. It's more of, I'm having it help gather ideas. I don't want to sound too similar to, like, what it's writing. Historians don't know everything about history, but they know how to find the information, and using AI takes the whole effort of finding out of the process.

Bill De Herder 09:29
Okay, so, we had a history major. He begins by talking about how he uses AI. So, he starts by describing just putting the entire assignment prompt in and seeing what comes out, and then working from there.

Bill Williamson 09:42
Right, treating it as your rough draft.

Bill De Herder 09:46
And then he also describes putting what he has created back into the system and asking, right, asking the LLM, did you write this, in order to do a sort of grassroots AI-checker thing, and then adjusting it to make sure that, I guess, an instructor couldn't come back at him for it. That's very interesting. He's using it to model, I suppose, the genre moves of the assignment. Right?

Bill Williamson 10:19
Yeah, you know, he's conscious that somebody is going to be looking over his shoulder and checking his work, so to speak. You know, is he relying too heavily on a tool? Is he relying at all on a tool? Of course, that depends on the individual instructors, the kind of policies that people are responding to. This is an interesting one, because both of us, in our classes, teach our students to engage in the process of writing all the way through, from idea development to final draft, using the tools in appropriate ways, or in ways that are going to help, but we always want the student to be in the driver's seat. Whereas this is more like, you know, that little paddle with the ball and the rubber band. That's a little bit what it feels like here: okay, I'm going to see where this goes, I'm going to try to keep this on the paddle. It's this back-and-forth relationship, or exchange, between the writer and the tool. But where this goes different, in this particular case, from what we try to recommend in our own classes, is that he turns to the tool first and responds to its output, instead of beginning to develop an idea and then using the tool to continue it. So he starts with the ball on the wrong part of the, you know, I don't know if that's the ball in the hand or the ball on the paddle, because now my metaphor is breaking down. But you know what I mean?

Bill De Herder 11:46
Yeah, yeah, it's interesting that he wants to use it to model the genre conventions around an assignment by just dumping in the entire assignment prompt, which is not what we recommend. And the system, as we know through testing, does not have a very complex, nuanced sense of genre anyway, particularly academic genres, which are highly discipline-specific. So he's not going to get anything super sophisticated out of these systems to work from. And then on top of that, he's feeding it back into the system, because he's now feeling paranoid about whether or not he's going to be caught and accused of using artificial intelligence. So then he's asking the system for advice on how to alter the text output. This wasn't in the recording, but he was also describing trying to use it to figure out how to cite things in Chicago style. And the system was just not doing that for him. Format-wise, it's too complicated for the system to reproduce. It requires, like, footnotes and stuff; that's just not supported in that open text box.

Bill Williamson 12:54
Well, on this process, when he takes the approach of putting in the whole assignment: I actually demoed that with one of my assignments in one of my classes, where we did that and we saw what the results were. And predictably, at least from my perspective, and this is what the students are not able to predict, and they're not necessarily always equipped to assess either, so this is one of the reasons why I did it in my class: you get a reasonable outline, you get a reasonable progression of thought processes that you can engage in, but you get a topic sentence with a lot of gibberish, followed by repetition of the topic sentence. So what it basically gives you is an outline. And if you then work to develop that outline, you might come up with something that's meaningful and, you know, fits the context of the assignment. But if you rely on it to generate all of the ideas, your ideas are not going to carry the day; the ideas that are there are probably going to be the most simplistic and the most obvious ones. And what do we value a lot in writing? It's that examination of something that's different. It's looking at the conversation and saying, how do I join the conversation? All this is doing is repeating the conversation.

Bill De Herder 14:13
Right? Because it, functionally, as a system of automation, only does the most superficial, predictable tasks, right? So if you're trying to create new knowledge, it's going to fail. The other really interesting thing that comes out of this history student is that while he's using AI in this particular way, to generate whole products that he can then manipulate, he's lamenting the glory days of research right before these systems existed. Which is really, that's interesting.

Bill Williamson 14:50
Well, and there's another interesting cue there, yeah, because, again, take it back to my classes, where I teach them how to use a combination of a Google search, a library search, and an AI search to look for resources that are relevant to the conversation they want to have. And of course, they get very different results from all three of those, right? The differences in those results are actually very, very useful. And then when you're using AI, you get back a list of however many you asked for; you know, by default, it's usually 10 or 15. I usually ask it for 25 or 40. I don't know why, but I do. And when those things are available digitally, of course, you can go get the text, you can feed them back in, and you can say, hey, will you summarize this for me? And it's going to tell you what that article is doing, or what that book or whatever is. It's really slick when the whole thing is in digital format. And then you can decide, hey, this is one that I want to continue using, right? Keep moving on. So you can still shortcut the research process with AI in productive ways that are useful even to advanced scholars. But at the same time, if you just throw it out there to see what happens, and you're just trusting what it's going to give you, you're giving yourself a really, really narrowly defined and limited scope of perspective.

Bill De Herder 16:11
Right? The way I often talk about the results that you're going to get out of an LLM, when you're talking about research, is that it's summarizing an internet search that it did for you, right? So it's literally stuff that it can find just floating on Google Scholar. And if you're lucky, it hasn't decided to just remix all of those elements into something that doesn't exist, right?

Bill Williamson 16:37
Well, yeah. So that is one of the interesting things that's happened, too: I've had it spit back URLs as locations for things that don't exist. And, of course, it makes me wonder if it's giving me bad summaries of things. But of course, if I'm using it as a research tool, then I'm gonna go read those things anyway. That's the idea behind it: I'm just asking it to help me filter.

Bill De Herder 17:02
But there's so much stuff behind a paywall that it can't actually see. Yeah. And the library databases have access to all that stuff behind the paywall.

Bill Williamson 17:11
But so here's one of the really good things that it does. So in my demo that I did in class most recently, I had it looking for stuff that was related to environmental issues. If I go into a library search, I'm not going to get something from the EPA, right? Not from their website, anyway. I'm not going to get something from a political action committee or a nonprofit organization, unless it's in a published source. But I will find those things on their websites, wherever they may be. And you get a variety of responses, even from that same organization. And so if you find an organization that you like, you can go look for resources that that organization has posted to their electronic spaces, specific to the topic you're looking for. A library search is not going to give you that information, right? In a Google search, you might stumble across it that way. But it's hard to make a Google search targeted in the same way that you can target AI for something like that. So there are definite advantages to each way of taking a look at something like that as a process.

Bill De Herder 18:15
Yeah, I'd say, overall, the way this student is speaking about the use of artificial intelligence and the effect on the field of history, he's losing sight of the role of the human in this AI interaction, to such an extent that he doesn't see history as a culture-based argument. Right? He sees it as, basically, you can look up the answer like a math problem, right? But all history is argument.

Bill Williamson 18:45
It's all constructed.

Bill De Herder 18:47
You're creating a narrative out of bits and pieces, right?

Bill Williamson 18:51
So here's a great example, though, of a student who has probably come to this place largely on his own, or in collaboration with peers like himself, who have a partial understanding of the processes that they're engaged in, because they're still students. This is one of the reasons why it's a fantastic idea to address these things in class, and to talk about, okay, what are your impulses? What are your assumptions? And then to begin interrogating those things, and to begin refining the thought process that is behind that kind of engagement, and then to refine the process itself. You give people the right mindset on how to engage with the tool, and then you give them permission to use that tool. They're going to find ways of being really effective and innovative with it. But only if they have a means of assessing their successes and their failures and their in-betweens all the way through that process. That's where we come in. If we aren't willing to do that, we're not helping them gain the bigger understanding of their discipline that they can get from the effective use of a tool like this.

Student 4 20:02
That's been my greatest success with it, just putting things in and having it spit them back out to me. And then I, I never use them as is. But it's helped me write, like, massive papers. I've created RA stuff with it, like, I have to do programs as an RA. It's sometimes like, can you tell me, like, a plan for this program that would be fun for freshmen? And then it will be like, oh, you should try this idea, or try this one.

Bill Williamson 20:26
So this person introduces the idea of using AI to generate big papers, and we don't have any context for that, but let's talk about that just a little bit, because this builds off of what we were just talking about with the history example, where if you just feed a prompt in, you're gonna get something out. It's not going to be amazing. It might give you that framework, it might give you that outline, but that's if the project in question is idea-driven, if it's argument-driven around ideas, specifically. Some of the classes that I teach are technical or professional; for example, report writing. One of the things that I've done for years in my report writing classes to help people do original research is have them gather their own data, because that way they're learning a problem-solving process. And then the writing is communicating about that problem-solving process, and becoming a problem-solving process itself. But also, because it's so data-driven, and because the data is all original, it's the kind of thing that if you feed it into the AI, it results in the most wonderful garbage you can imagine. Case in point: I did have a student right after the boom, last year, last winter semester. The first big project is coming up, they did a little bit of a panic thing, they had fallen behind. And it was very obvious that the draft that I saw come in was generated by ChatGPT, or one of those tools, because it was phenomenal garbage. It did not even understand the prompt, because it can't. It produced almost no content, in a 10-to-12-page report, that had any bearing whatsoever on what the prompt was actually asking for. And it was clear that the student either figured, hey, it's a draft, I'm just going to submit it, and it's not going to matter.
Or they were so not invested that they didn't understand the assignment and couldn't assess how the AI did not understand the assignment. Right. And this is the real danger. So, you know, when professors talk about their fears about students just using AI, this is the kind of stuff that they're talking about. But my question then is, if you're assessing it as a piece of writing, and it's turning out garbage, what is your conundrum? If you just take them at face value, you don't even have to accuse anybody of anything. This is your draft? Yes, this is my draft. Okay, I'm gonna assess it as I will other people's drafts, right? Guess what? It's not going to fare very well. Right.

Bill De Herder 23:24
So, I mean, these systems don't understand what words mean. They have, like, such a narrow, narrow understanding of what we could consider meaning. And all students function as theorizing beings. And by theorizing beings, I mean that they are interpreting the world around them. Right, right. So you have some sort of vague ideology, philosophy, and you're applying it to emergent situations, right? And that's what I think about when I think about creating new knowledge through classroom assignments. Right. So if your assignment is just, give me the history on a particular topic that is freely searchable on the internet, yeah, it will start dumping out loads of text. It definitely will.

Bill Williamson 24:15
Right. And some of it will, you know, be good. Good enough.

Bill De Herder 24:18
Yeah. So even something as simple, in my English 111 class, as assigning a genre analysis as an early assignment. Okay, because of the way I did it, I don't think the LLMs were able to touch it. So I teach them about John Swales' move structure. And I talk about this move structure analysis: you have to look at a series of document types and define your own move structure, and then prove that move structure to me with examples from these exemplars.

Bill Williamson 24:53
Yeah, so you're applying a very specific filter for thinking and for analysis to it, an examination.

Bill De Herder 25:01
And it's built on a really abstract idea. It's a theory from John Swales about, you know, here's how move structure works. You have to define it for me and explain it for me, and then you have to apply it to any genre of your choosing. I still had one person submit a history of country music, which is something the machine could do, but it was not a genre analysis. And I told that guy he had to write it again.

Bill Williamson 25:29
Yeah, and, you know, even though we're laughing about it, one of the differences between Bill and me and some of the people that we've talked to is that when we see something like this happen, we're not necessarily judging the student. We're wondering, what led you to this pathway? You need help; you have bigger issues than my class if this is your strategy for success, if you have so little investment in interrogating the results of that process. So, you know, that becomes a teaching moment in a very, very different way. And I think that's one of those things where that's more about doing school; it's not really about doing AI. That's about, you know, how do you map a desperate act?

Bill De Herder 26:18
And me, as the instructor, I am not riddled with anxiety, trying to figure out how to police the system, right? And how do I approach the student and deal with the emotional blowback that's just naturally going to escalate? I don't get caught up in any of that. I'm just like, this was not the assignment. Write it again.

Bill Williamson 26:40
And, you know, the flip side of all of that is, if somebody isn't invested in an assignment, there are a lot of potential reasons for that. They don't find it very interesting, they're not sure how to start, they haven't been paying attention, you know, they're distracted, whatever the case may be. But even if it's a really fantastic assignment, and everybody else is engaged, it still doesn't give you a basis for judging from this one person, right, or this small handful of people. And that's where, again, careful integration of the tools in a class context will result in better outcomes than that one. Right? And you were trying to teach those kinds of things in class; you're trying to get them to engage with the technology and to see its capabilities and its limitations. And so you've still got somebody who's disconnected from that.

Bill De Herder 27:34
Well, I mean, literally, we spent three weeks on it. Yeah. And my intention was to teach them, like, small steering maneuvers, yeah, to get small pieces out that they could layer into a draft, right? But that still takes time. That's not getting an entire paper out on a history of country music that doesn't address the prompt at all, because the machine can't grapple with the particular things I'm asking it to do.

Bill Williamson 28:01
Right. And so I guess what I'm trying to say with all of that is that the people who just rely on the technology completely, and aren't really doing the assignment at all, really are the outliers in a class like yours or in a class like mine. And they shouldn't be the basis on which we judge the tool, or the basis on which we judge all students. That's true, you know. And, by extension, they shouldn't even be the people that we use to judge our own assignments, because you're not really getting the kind of data back that you need. If you treat them as outliers, just look at the rest of the people that are doing the assignment. That's where you try to learn how to tweak and how to evolve.

Bill De Herder 28:44
Yeah, you got to steer clear of getting just squarely sucked into the moral panic of the moment.

Bill Williamson 28:52
Yeah. That's sound advice.

Student 5 28:58
Even if I, like, prompt it with something, it usually won't give me something useful back. It'll be general and not very specific. But it might give me, like, an idea or a snippet that I can kind of be like, okay, cool, I'm gonna do research on that, or I'm gonna write something on that, and actually come up with something on my own. I talked to someone who's doing cybersecurity right now, and I was kind of asking her about how AI is being used. And this is, like, different from my major, because I'm more into the technical writing, user experience stuff. But she kind of was saying that, yeah, people are starting to use AI to get better, and businesses are starting to pick up on AI because, in order to combat that, they need to use AI. Blocking students from using technology is just going to lessen their confidence in those technologies when they have to use them, or they'll just ultimately get left behind.

Student 4 29:50
Currently being in teacher education, like, my host teacher right now, she hates ChatGPT. She's like, I don't want them using it, I don't want them touching it. A lot of people, though, see it as kind of like this bad thing, but I feel like it's so underused. Everybody's talking about how there's this gap within education, within high schools and middle schools, but, like, how are we working towards fixing that? They're not. Like, that's just the honest truth. We're experiencing students who are in eighth grade reading The Three Little Pigs because they can't read at that level, because they can't read any higher than that, which is sad. Why aren't we using something like ChatGPT to help them, and to assist them by creating assignments, creating ways for them to be better writers? Because if we're dealing with that limitation of education, then we should give them the resources to enhance that, at least in some aspects.

Bill De Herder 30:36
What I think is really cool in both of these responses: we have a technical writing student that we have both worked with for a while with artificial intelligence; her name is Sam. And Sam gets at a really, really important point about the complexity of the material that you are working with and attempting to produce. Right? You know, this is going to be a moving target with how these systems are trained, but let's assume for the moment that maybe LLMs can read things at a sort of first-year college student level, and might be able to produce writing of that sophistication as well. Right. So maybe that's sort of the tier that it's floating at. Sam is a senior. Sam identifies that, you know, she can pull something out of an LLM, and it will be super general. There may be some kernel of usefulness in there, right, that she can riff on. But she recognizes that if you're asking it directly for some sort of transition sentence from one idea to another, it's going to be so generic and general that it might not even be useful to put in your paper. Right. And that's even with, like, a very pointed, direct, small thing that you're just trying to get out. Right. But then August comes in with the flip side of that. Right, right. And this education student, August, he talks about the people who are below that sort of complexity threshold, and how can we use AI to bring them up, right? Right. So I think both of them are sort of speaking to a skill gap. In one case, the system has a skill gap with you, right? And in the other case, you have a sort of education divide, a digital divide, with public school students who may or may not have experience with digital systems like an LLM, but also may not have really good literacy sponsorship, you know, people who read to them, people who took them to the library. So there's that skill gap as well. And how can we bring humans up to the skill of the machine?

Bill Williamson 33:05
Yeah. And you listen to somebody like Sam here talking about how the tool has its place, the tool has its purpose, but the tool is not as good a writer as she is, right. And this echoes what I say all the time: I have yet to have the computer generate a piece of writing that comes anywhere near the level of what I can produce. I actually had someone tell me that that sounded very arrogant. I said, well, no. If I can't write well, as someone who is 30-plus years into a career as a writing scholar and a writing teacher, then I'm deficient. I'm the problem.

Bill De Herder 33:48
Yeah, you know, you have had so many decades of refined training in the nuance of one particular term over another.

Bill Williamson 33:58
Yes. And if I cannot write better than a computer, then I need to hang it up and pick a different career, something that doesn't involve words, probably,

Bill De Herder 34:08
maybe paintbrushes and power drills.

Bill Williamson 34:10
There we go. I could become the next, you know, niche artist.

Bill De Herder 34:14
I find that, for me, I am so picky about the things that I write, yeah, I would just be disappointed with anything that I got out of this LLM system. I'm habitually disappointed. Some things are workable, and if you're in a particular situation where you really don't care, or it really doesn't matter, or it really is that predictable, and the terminology doesn't matter that much, you don't need to apply your brain to it, then maybe you could use the system for that. Well,

Bill Williamson 34:43
and then flip it around. Like you said, August talks about the gap in the other direction, which is someone who doesn't have the practice thinking, doesn't have the practice writing, doesn't have the practice reading sophisticated argument. And they're still in the process of developing those skills, maybe even very, very early in the process of developing those skills and that kind of knowledge. You know, here is a great opportunity to invest in learning how to use a tool that's going to show you some direction. Now, that means putting a lot of trust in the tool, and the tool would need to be that trustworthy. However, the more simple the arguments, the more simple the documents, the more simple the things that are being presented and examined, the less margin there is for error. You know, you don't have to worry quite as much about a simplistic thing being misinterpreted. It's the really sophisticated stuff, the really data-driven stuff, that you've got to worry about more. So is it possible to teach people literacy skills through generative AI? Absolutely. And if people were to recognize that in a more broad-spectrum kind of way, it would be a different attitude that we developed toward it. And in fact, if you look, there are tutoring systems, the name is escaping me of the one that I just read about not too long ago, but it's essentially an AI-driven tutor, where the students can ask it questions, and it will answer those questions. And they're being trained to ask questions that are specific and narrowly defined, or "show me an example of how to do this," and it'll walk them through the solution for a problem, step by step. And they can do it over and over and over again if they need to, and if they've got the patience for it, and if they've got the focus to do it. What a fantastic learning tool, to do something that the human doesn't have the patience to do.

Bill De Herder 36:37
Well, not that anyone cares about this, but here's some writing center lore for you. A number of years ago, there was a PhD candidate out of UC Berkeley who created some software, and the company was called WriteLab. And he was all over the International Writing Center listserv selling the software to writing centers. And the idea was that it would scrub through a document and just produce all of these little comment bubbles on the document automatically. And then the tutor's job was to sort of filter it. Oh, that's interesting. Yeah. So the tutor's role effectively became mediating the technology and then just passing the draft on to the student, right. The writing center community didn't really pick up on this, at least not from what I saw. And eventually, oh, would you like to guess who bought it? It was Grammarly. Grammarly bought WriteLab and integrated that technology into their stuff. So, wow, yeah. So this sort of automated tutoring thing, it's been around for a while.

Bill Williamson 37:46
Yeah. And, you know, this is one of the things that we said earlier: there really isn't a whole lot that's new here, right? It's just a new level of development for some of the tools in some ways. But this kind of tinkering has been going on for a long time. And I've made the argument multiple times that what it's shown us, actually, is the habits that we've fallen into in education that are time-saving for teachers, but not necessarily time-saving in a way that is still invigorating for students. If you can make both of those things happen, that's a big accomplishment. But we've dulled students' interest, and saved ourselves time, and in the process have created an environment where AI can now do the homework, because it wasn't that challenging to begin with, right? These discussion boards, article summaries, things like that. And there's nothing wrong with those activities. They're actually really useful, lower-level writing activities. But, you know, just like anything else, to use the parallel of the calculator, which we love to use: if adding and subtracting small numbers is easy to do, why not let the calculator do it? If the activities are really low-level, why not let the machine do it? Or instead turn the assignment into: okay, the machine did a summary of this article. Is that a good summary? Or is it a summary that could use some tweaking? Even that's a step toward a more sophisticated writing assignment and a thought assignment. Is that amazing? No, but it's a stepping stool. It's something that gets you to somewhere else that you really want to get to. And it's a place along the way.

Student 6 39:40
I guess I don't know how I feel about using AI, like, almost morally. Like, I know it's not copying someone else's work, but it feels like maybe taking the easy route.

Bill De Herder 39:52
There is so much ideology loaded into that one statement. Like, there's so much anxiety around whether or not it's cheating, or whether it's even morally correct to engage with the software. Right, right. So, is using an LLM cheating?

Bill Williamson 40:15
So this is where "it depends" would be an answer. But there's a part of me that says no. I think using an LLM to do an assignment is more like, like I was saying before, it's revealing that you are lost. And, you know, then take that around the corner: why do people cheat? Because they're lost. They're reaching for something because they don't have an answer, and they're reaching for something because they are desperate. And is it always laziness? No. Is it sometimes laziness? Sure, why not. Does it make them a bad person? No, it means they're not really engaging with the world that they're in. And that's disappointing, I'm sure, for everybody who's a teacher who looks out and says, okay, everybody's falling asleep. Oh, I see a stream of drool right there. You know, are we doing something meaningful by accusing them of cheating and holding them accountable for cheating? Not really. What's the learning moment that happened there? So I think us accusing people of cheating, and judging them morally for cheating, is just as bad as reaching for the tool to cheat in the first place.

Bill De Herder 41:36
I think from the student perspective, the moral panic around artificial intelligence, for me, is largely driven by our deep-seated cultural obsession with the individual's capacity to reshape their own environment, pull themselves up by their bootstraps, and compete in a global marketplace. Right. And certainly that is also reflected in the instructor perspective, you know, whether or not the student did their own hard work and all of that. But there's also a little element on the instructor side, in the moral panic, of timing as well. Because we just got over the pandemic, where we had to flip the table over, flip everything online, and completely change everything that we were doing. And that was exhausting, and it was awful, right? And then here we have this next thing just a couple years later. And now it's this inciting incident that tells us, oh, we've got to change everything all over again, right? Yeah. And that's exhausting to deal with. And there are all these implications about whether or not it will have repercussions in the academic job market, right? So there are all of those anxieties. But then also, underlying all of it, like the sediment on a riverbed, is this moral panic around: did you do your own work? Are you working hard enough? Do you deserve this grade? Right? The reality is that nobody does anything by themselves. We are all interdependent. We have all learned from other people. And every time we do a project, if we're doing it well, we're doing it with the perspectives of others.

Bill Williamson 43:22
Yeah. And we're always informed. You know, one of my professors in grad school had a great phrase, not his own, ironically, or maybe not ironically in this context: standing on the shoulders of giants. You know, there are always those people who've come before us, who have taught us things that we've learned from, and they shape and influence us.

Student 7 43:41
I even use it when I'm not writing for school, too. Like, if I'm writing emails, I'll definitely put it into ChatGPT, like, "write me an email for this." Or I will take a paragraph out of my paper, put it into ChatGPT, and just say, "make it grammatically correct." I don't know, everyone talks about how AI is gonna take my job, but I'm not really worried about it, because it can't. They already tried to do that in the 70s, and it didn't work at all.

Student 8 44:03
Like I was saying, AI is not going to take over people's jobs. People using AI are going to take over those jobs, though. You're putting yourself ahead by learning how to utilize it. Exactly.

Student 7 44:13
Yeah. Because, like, I want to be a therapist, so it's definitely more people-based and more personalized. I think it's an incredibly useful tool that can be easily mishandled. Well, artificial intelligence only knows what we teach it. So if we teach it something that could be used for, quote unquote, nefarious means, then that's all it's going to know, because it doesn't have a concept of good on its own. So it's really important to be able to teach artificial intelligence, I don't want to say, quote unquote, good things, because I don't think you can necessarily teach artificial intelligence morality, but at least making sure that there are certain rules in play so that nothing bad happens. And then, I guess, yeah, I don't know a ton about artificial intelligence. I know that it's like a super powerful tool, right, but other than that, I'm not very well versed in, like, the inner workings of it, so I'd have to do more research. And because I'm going into disease research, I think it could most likely revolutionize the speed at which we can discover new things about diseases and exponentially grow our knowledge at a super fast rate. It's just a matter of what humans can actually handle versus that giant database of knowledge.

Bill De Herder 45:51
So here we have a collection of students who are all healthy skeptics, yeah, and they're seeing it as a tool, right?

Bill Williamson 46:01
And the comment about outcomes only being as good as the input? Yes, absolutely. You know, this echoes things that we've been talking about for a while.

Bill De Herder 46:09
Yeah, I really appreciated the psychology major. Two points that she brought up that I thought were really cool. One, she's talking about using it to apply for jobs, like professional documents. Professional documents are so standardized, so predictable, and so superficial that, yes, I think the machine can help you there, right. And then the other thing is her job after she applies, after she gets the job: she's thoroughly convinced the LLM won't be able to perform the sorts of roles that she will be performing as a therapist, right? Because it's all emotion, and emotion carries all the meaning.

Bill Williamson 46:50
Yeah, and you put that in the context of the pandemic, and there's the tool that people were using that is an AI-generated counselor, so to speak, but it's really just a bot that echoes your input back to you. But it's the feeling of someone acknowledging you and validating what you're feeling, right, that is what people were taking away from that. So it's the most simple, most basic level of therapy that you could possibly get: validation, and someone just taking the time to listen to you. Okay, so what have we talked about? The simple things it does just fine. The more complex it gets, the more layers you add on, the harder it is for the computer to keep up, and of course, eventually, the more impossible it is for the computer to keep up with that process.

Bill De Herder 47:43
And it's not like these machines can be trained on super-firewalled, rare case notes, right, about what actual advice was given and what the actual outcomes were, and all of that. Speaking as somebody who worked at a medical school as a writing specialist, working with med students inside of an actual counseling department, right, which is a very interesting structure to exist in, I know that these therapists come together and they share notes, right. And so when you're getting access to a therapist who's a human, you're getting access to a hive mind.

Bill Williamson 48:19
Yeah, a team of therapists, in fact, even though you only know one of them.

Bill De Herder 48:22
Right, and they all have, like, deep, deep understandings of meanings and emotions. So I just thought that was so cool. And we also had other people talk about, you know, the potential perils of using artificial intelligence, because you train it on particular data sets, and therefore it then operates in particular ways based on, you know, the feedback it receives.

Bill Williamson 48:50
Yeah, I mean, the comment about morality, that you can't teach a machine morality, just like you can't teach it empathy. You can teach it things, you can feed data into it that approximates morality, that approximates empathy, but it's not a human level of engagement. And you're only going to get output, again, that's as good as what you put into the system, what you teach it to do, or what you teach it to know, and how you teach it to think. And that's a long, ongoing process, just like it is teaching one of our kids or something like that. Although hopefully my kids turn out better than robots. What you don't hear in any of those clips is us responding and having dialogue with people, which was fantastic. We got to have some wonderful conversations. But it was just a really nice way of connecting with, you know, whoever stopped by, getting a sense of the pulse, if you will, of our campus, thinking about these kinds of tools. This is Bill from Bills and Bots,

Bill De Herder 49:54
This is the other Bill from Bills'n'Bots.

Bill Williamson 49:56
and we just want to make sure that you are listening in and looking for new episodes. We'll be back with another one as soon as we can.

Bill De Herder 50:04
And ciao.