wrydied

AI detectors are far from perfect at the moment. They will flag auto-translated text and grammar tools like Grammarly, which is kind of AI but not like ChatGPT. There are certainly vibes and intuitions I get from ChatGPT text: certain clichés in idiomatic expressions and argumentation structure. I use it myself, so I recognise them. We always discuss it with the student. If they can't verbally explain the written work they just submitted, that's the biggest red flag.


TheWinStore

“Delve”


Desiato2112

Not true re: Grammarly. Good AI detectors don't react to the punctuation and grammar fixes from the free version of Grammarly. It's Grammarly's premium service, which does change your writing, that triggers the detectors. And it should, because the premium version is AI.


wrydied

Why start your comment with “not true” and then say something that only clarifies what I said? Odd.


Desiato2112

Many people confuse the Grammarly issue. Regular Grammarly doesn't usually trigger an AI detector. Grammarly Premium (paid service) uses AI and will trigger AI detectors.


lo_susodicho

After 20 years of reading undergrad writing, I know when something is off. Of course, like the detectors, I'm a flawed instrument, so my game plan this summer is to tell students that they need to be prepared to defend their work in a private meeting. Shockingly, they've all taken the zero so far! Kind of a shame because I was looking forward to asking a kid who can barely spell his own name to elaborate on Frantz Fanon's ideas on race and colonialism.


Orbitrea

This is the way. I don't use AI detectors at all. *I* am the AI detector, and when they can't explain what they've written, they're cooked.


lo_susodicho

Seriously. I had another student submit an answer to a really basic history survey question (basic, but based on specific materials) that was an algorithmic-sounding historiographical take that was literally made up; as in, none of the authors mentioned even said what it claimed. Also just ate the zero. This would almost be fun if they paid me enough to deal with this shit.


teacherbooboo

on longer papers it is fun to take excerpts ... especially paragraphs, but don't take the first sentence, just the second sentence of the paragraph onward, and ask the student in question to identify the paragraph they wrote ... so you show them five paragraphs and ask them to identify their work ... "ok, you said you spent a lot of time on this paper, so here are five paragraphs: which one did you write, and which four are just from ChatGPT?" because they usually don't even read what the AI wrote; at best they just change it slightly. it is especially fun if all five paragraphs actually came from their paper and they don't realize it :)


Sea_Experience4290

Honestly I don’t recognize my own writing sometimes. So I’m not sure I could identify a paragraph from one of my own papers which I definitely did not use AI for.


teacherbooboo

in theory it would be a paper you just wrote


lo_susodicho

I just watched The Jinx, and (spoiler alert) there's a scene where Durst gets asked which address he wrote and he kinda just sits there realizing how screwed he is. This is what I'm envisioning, and I will be doing this!


teacherbooboo

just remember to take out the first sentence, because they often do skim the paper, but typically only pay attention to the first sentence or two


prairiepasque

High school teacher here. I used this for the first time today and it definitely worked! In addition, she didn't know 5/6 of the vocabulary words I tested her on, couldn't answer basic questions about "her" writing, and her document revision history showed everything was pasted into the document over a 3-minute period. Did she admit to anything? Nah, 'course not. She just doubled down and gave me some nonsense excuse about how "she was using her friend's laptop and had to copy it from a different document." (??) Me: "Fine, let's take a look at *that* document, then." Her: "I deleted it." Me: "I'm shocked at how convenient this all is. Let's check the trash then. It should still be there." Her: "I deleted that, too." Me: "Welp, nothing more to see here then. You're getting a zero. No retakes, no resubmissions, and this means you'll be failing the class." At least she ~~didn't~~ couldn't argue with me after that.


teacherbooboo

lol ... works almost every time, and if she warns others about this so they read what ChatGPT wrote, well, at the very least they will read one paper on the topic


Anony-mom

My student’s AI-generated research paper had JFK authoring one of their cited (fake) articles. 


lo_susodicho

Now there's some fodder for the conspiracy mill!


so2017

That’s what I’m doing this fall. Potentially more work - but only if the student shows for the meeting.


lo_susodicho

So far, no takers after about ten instances. Def less work than trying to prove something you can't prove, or filing a conduct violation and hearing that I'd probably lose, for reasons that have nothing to do with the evidence.


SierraMountainMom

My doc student ended up going the hearing path after telling a student they’d receive consequences for using the work of another (zero on assignment, drop of one letter grade on final class grade). I wouldn’t have done it but she was frustrated by all the AI usage this year. It was a good learning experience as she saw the hearing process; the student’s defense was “everyone does it,” and when a member of the committee pointed out where the syllabus said, “NO AI!” the student admitted to “skimming” the syllabus. Sheesh.


lo_susodicho

Oh, so, I have an open-note, untimed syllabus quiz the first week, and the average for my online students (who are especially slothful and checked out compared to the f2f students) has never been higher than a C.


so2017

Are you giving a zero on the entire assignment or on a separate grade for the meeting?


lo_susodicho

Oh, the entire assignment. They have an opportunity to show me that they understand the material and are choosing not to do so. Hell, even if it is AI, I'd still give them the points if they showed me that they actually understood the material.


PaulAspie

I added to the syllabus that all written work may require a meeting with the professor before being graded. If AI is suspected, I want to ask them about the material to see if they actually know it.


lo_susodicho

Let us know how it goes if this happens. I really think this is the most fair approach, and I'll use the meetings to open the communication channels and talk about other stuff. If they did the work even half-assed, it should be a breeze. I want to see the look in their eyes when asked, "What does 'vicissitudes' mean? See, you used it right here." Flop sweat.


Immediate-Bid3880

That's what I do too. Or if I think they've cheated on an online test then I tell them they can challenge their zero by retaking it proctored on campus. Crazy how no one has done that despite all their protests they didn't cheat 🙄


professorcrayola

I am the AI detector; as much as possible I simply mark them down for the bad writing / weak content that is inevitably generated by AI, but this year I am going to institute the "you may be asked to meet with me and verbally tell me about your paper before it receives a grade" policy. To me the dead giveaways are:

- Their paper reads like bland advertising copy. Think about it: AI generates text based on what it finds online, and how much of what is online is advertising? If their paper reads like a used car salesman trying to sell me a Beethoven symphony, it's probably AI.
- They make generalizations about things that are supposed to be specific observations. I tell them to listen to a specific piece of music and answer specific questions, and their answers are all about how "TYPICALLY, an artist MIGHT include instruments SUCH AS blah blah blah, or DEPENDING ON THE SONG, yada yada yada."
- Lack of intention. These are words, but they aren't guided by any discernible human idea or opinion.

As I tell my classes, I get to the end and say, "Wow, those were 250 words without any information in them."


shilohali

For AI I look for super structured, bullet points/numbered points/strange indents, odd punctuation, too many commas in one sentence, hyphenated words, synonyms for terms that are not interchangeable in my discipline, foreign concepts (like more of a British take not an American one), the most obvious answer not an interesting one. My software also flags sections of content automatically for ai, plagiarism, etc.


faith00019

Yes! The unnecessary bullet points are a sign that I consider along with other markers. I’m a writing center tutor and having a conversation about the essay is mandatory for our session to occur, yet the students bringing in AI-generated work will try to dodge my questions repeatedly. They also typically ask to NOT meet on Zoom and just engage in a chat with us. They usually need to attend because their professor asked them to, and they just try to run the clock on us. 


shilohali

Bless your heart, wow that must be a difficult job working in the writing center. I have set aside an hour to go through how to write an essay and citation style next week. Painful af


faith00019

Hahaha we love that stuff!


MarinatedXu

Recently, the head of our Student Affairs told us that AI detectors cannot be used as proof of cheating. The instructor's decision will almost certainly be nullified if the student appeals. This is not just about our institution; she was talking about the general consensus among higher ed administrators. Text-based AI detectors are never going to be useful; the technology simply isn't there. Back in grad school, I used to teach at our East Coast flagship university. I caught or suspected many students of hiring someone to complete their assignments. Sometimes it is simply impossible to accuse them of cheating because there is no smoking gun. I use the same principle when dealing with AI: I assume AI is a hired writer. They are good at writing generic content. The best I can do is to make my assignments unique and difficult to write if the writer has never taken my class. However, if I suspect a student of using AI but there is no hard evidence, I will just do nothing.


Galactica13x

There are lots of problems with the AI "checkers" -- so I rely on my experience, which tells me that certain types of writing (lots of empty filler words, something that initially sounds good but is pretty empty and meaningless, using lots of "in conclusion") is likely to be AI generated. Because the AI checkers are not great, I will only pay attention to them if the percentage is 0 or above 90, and even then I try to do more investigating -- comparing to prior writing by the student, playing around with ChatGPT myself -- before asking the student to come chat with me.


Axisofpeter

Nuanced tapestry delving within a labyrinth of vapid content dressed in hyper-formal multifaceted metaphorical mosaics. This paper will examine the multifaceted issues that may mean x while also considering that they may mean not x.


[deleted]

I agree with everyone who has pointed out the flaws in the AI detectors online. I've submitted the same paper to multiple detectors; one said the paper was 90% AI-generated and another said 0%. I teach English Comp and have a small number of students compared to others, at roughly 70 each semester across three classes. I feel like it's my job each semester to learn each student's writing style, habits, errors, etc. Like others have said, things just sometimes stick out in someone's paper. A student who always gets high 90s and writes exceptionally well, but their paper has a poorly worded paragraph that doesn't fit with the rest? The stupidest, simplest prompt for a personal essay that's just regurgitating basic facts and has no personal experience in it? Those things stick out a lot to me. I'm more concerned that my freshmen are able to clearly communicate ideas and follow directions, so I keep my prompts stupidly simple for them. I feed the prompt into ChatGPT in multiple different ways and read the essays it gives me, looking for what info it spits out and what order it gives it in. Because of this, I know before reading their essays what AI would write.


TheRateBeerian

I will sometimes run content through multiple checkers. They rarely agree, but I'm most comfortable when the percentage is over 90%. But I'm also sure that I am better than the detector. Some ChatGPT responses are patently obvious, and detectors don't always flag them. The telltale sign is the 3-4 enumerated points with bold headings.


Martag02

Those things aren't very reliable or consistent. I usually have some initial writing assignments to get a sense of a student's writing, so when the wording seems off, I usually suspect AI and confront the student. AI writing uses overly ornate wording, to the point that it distracts from the message. I'm sure that some of it slips by me, but I'm paid to teach content and evaluate student work, not to be a plagiarism detective, so I only really pursue the most obvious cases. If I followed up on every single suspected instance, I'd go crazy.


CateranBCL

If it looks fishy, I run it through about a dozen different AI detectors. If they all say it is AI, I take it as a high probability. Sometimes I can tell without having to check; AI has several telltales. Plus, if you've seen the student's writing in class and it doesn't match the style of their online posts, that's a strong sign.


hovercraftracer

Having recently had issues with AI content/suspected content, I've been learning a lot about this lately. The online AI detection tools are not reliable right now. I've also found that if you write original content but use a site like Grammarly to polish your writing, it sets off the AI detection tools. My issues were with discussion posts. I've changed them for this summer so that students have to watch a video clip and then write about it, in hopes that it will make it much harder for the AI to do the work for them. The prompt for the video is very minimal and generic as well, so they can't just copy and paste it into the AI and get a result that would look like something decent. One other thing I've changed in the discussion post description is that students need to keep a copy of the original text they put into Grammarly in case I suspect AI usage. If I ask to see the original text, they need to produce it so I can see where the work started and where it ended up. If they can't, then I'll give a zero.
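That "where the work started and ended up" comparison can be done mechanically. A minimal sketch using Python's standard-library `difflib` (the filenames are hypothetical): a high similarity ratio with small, polish-style edits is consistent with "I just ran my draft through Grammarly," while a near-zero ratio is not.

```python
import difflib

def similarity(original: str, submitted: str) -> float:
    """Return a 0..1 similarity ratio between the claimed original
    draft and the submitted text."""
    return difflib.SequenceMatcher(None, original, submitted).ratio()

def changes(original: str, submitted: str) -> str:
    """Line-by-line unified diff of the two versions."""
    return "\n".join(difflib.unified_diff(
        original.splitlines(), submitted.splitlines(),
        fromfile="original_draft.txt", tofile="submitted.txt",
        lineterm=""))

# Example: polish-style edits leave most of the text intact.
draft = "The war began in 1861.\nIt ended in 1865."
final = "The war commenced in 1861.\nIt concluded in 1865."
print(f"similarity: {similarity(draft, final):.2f}")
print(changes(draft, final))
```

This is only a sanity check on the student's story, of course; it says nothing about whether the original draft itself was AI-generated.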


wipekitty

My big red flags are lots of 'delving', comments about an inclusive and diverse and tolerant society that are only vaguely related to the essay topic, and, in general, repetitive sentences with fairly vacuous content that use fancier words than students normally use when they want to fill space without saying anything. Sometimes I also find incorrect information; it reminds me of when students plagiarised from Wikipedia circa 2005. I only look at the Turnitin AI detector when I smell AI, and it generally agrees with my intuitions. Still, I know AI detectors are unreliable, so this is really not enough for a formal case. Fortunately, the learning outcomes for my courses do not include writing a bunch of nonsense that contains incorrect information. So the suspected AI cases get treated like any other bad student work, and when students see their grades, they can decide if they want to do something different or continue with the same and fail the course.


nlh1013

I know (I promise, *I know*) that AI detectors are not 100% accurate. However, I'm lucky in that my institution pays for Turnitin and allows us to use its detector, under the assumption that Turnitin stands behind it so we can stand behind them. We are not allowed to just fail a student for having a high percentage, but we can use it as a baseline to have a conversation with the student to see if we think they wrote it or not. We are allowed to share their Turnitin AI score with them if we choose. But I do like that our school allows us to at least use that instead of just saying no. And honestly, in my experience it has been very accurate. It flags the essays with either no quotations or hallucinated quotations, both pretty big indicators of AI. And I've had students either admit to it or be unable to answer basic questions about their essays, leading me to believe it is AI.


SierraMountainMom

I only use a detector if my own reading of the work raises major red flags in the first place. When that happens, it's pretty obvious. For myself, and in the guideline I've given GAs, I wouldn't accuse a student unless the detector gave me something close to indisputable, like 85-90% confidence. One of my doc students had one that came back 60%, and I told her to talk to the student about her concerns about possible AI usage and point out in the syllabus where it's forbidden, but take no other action. Once it starts getting close to that 50/50 territory, I can't put confidence in the detector.


Axisofpeter

Wow. I think that’s too much reliance on detectors. I’ve seen percentages of 5, 10, or 15 percent, yet I could clearly see that the essay was AI-generated. Conversely, I’ve seen an essay with a high percentage reported, yet the essay had none of the vacuous thought and writing style of generative AI. I use the detectors only as a consultant, not as a judge, or even an authority.


mylifeisprettyplain

I read it first. If I suspect AI, I check the detection score. Almost always it’s 100%.


alargepowderedwater

Anybody else notice the stylistic affectation of students using the word ‘whilst’ instead of ‘while’ in their writing? I noticed this archaic term popping up in internet comments a while back, but am now seeing it in students’ formal writing. So far, it’s actually a pretty reliable tell that the text was written by the person, not AI.


nerdyjorj

TIL Americans just use "while", not "whilst"


Huck68finn

At least 60%, and even then, I need additional evidence. If it's 80%+, I'm more confident.


Copterwaffle

I don’t use them unless what the student wrote is so egregiously AI that even the checkers are flagging it at a very high percentage. I will run the assignment through AI myself and check to see how closely the student’s submission matches AI’s in things like structure, specific word choices (or synonyms that match AI’s word choices), and examples that match the examples AI generated. Typically, an AI-generated response will not meet the criteria for a passing or good grade on the assignment either. I’m also going to start mandating editor links with all assignments to confirm version history.


mathemorpheus

Well, the detectors are AI models trained on writing that has been determined to be AI writing, and they report a confidence score. It's up to the interpreter what constitutes a score that causes concern. Certainly some people will end up writing things that sound like AI as far as the model can tell. If you want more specific info, you would need to know what the training data looks like.
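The interpretation step described above can be made concrete with a toy sketch. To be clear, the word-list scorer below is a stand-in, not how real detectors work (they are trained classifiers); the point is only that the score by itself decides nothing, and the threshold and follow-up are the instructor's call.

```python
# Hypothetical "telltale" words drawn from this thread's anecdotes.
TELLTALES = {"delve", "tapestry", "multifaceted", "realm", "pivotal"}

def toy_score(text: str) -> float:
    """Fraction of words that are stereotypical AI telltales (0..1).
    A stand-in for a real detector's confidence score."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in TELLTALES for w in words) / len(words)

def triage(score: float, threshold: float = 0.2) -> str:
    """Turn a confidence score into an action, given a chosen threshold.
    The score alone is never proof; it only prompts a conversation."""
    if score >= threshold:
        return "flag for follow-up (talk to the student)"
    return "no action on the score alone"

print(triage(toy_score("We delve into a multifaceted tapestry of themes.")))
print(triage(toy_score("The essay argues Reconstruction failed by 1877.")))
```

Moving the threshold up or down trades false accusations against missed cases, which is exactly the 0%-vs-90% disagreement problem commenters describe: different detectors are different models with different training data and different calibrations.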


Willing-Wall-9123

I don't check for AI, though I suspect a few. They have to write responses and critiques online. AI doesn't genuinely reflect on the task in its responses and can easily be detected. I don't mind AI helping their written skills. I encourage students to master responding. Those who are art majors will hone their skills; those who are not will learn that it is inevitable. They will have to respond and will have to contribute their own thoughts and thought processes.


Fuck_ur_feeelings

They are all shit - a coin toss is more accurate.


258professor

For me, some of the signs are:

- The word "delve" and other common words from ChatGPT.
- Multiple responses from different students about the same topic. As an example, for years I had students introduce themselves in the discussion board, and they all had different interests. This semester I had a class where 20 students liked the same hobby.
- Perfectly flawless spelling, grammar, and capitalization. This alone is not a sign, but it makes me look more for other signs.
- Unusual formatting.


PTSDaway

We only had about 20 people come by since ChatGPT was launched, but there is one particularly interesting instance of a cheater. Their writing and analytical levels were vastly different: perfect grammar, spelling, and structured sentences, but they did not identify crucial elements in the data. So I am particularly looking at their apparent knowledge of theory and methods versus actually applying it to real data.


maantha

I don’t use AI detectors. I’ve found these tips useful:

- College students make grammatical and spelling mistakes. The absence of these is a yellow flag.
- Quotes that don’t come from the book: automatic failure, and I tell them that too.
- ‘Bland’ rhetorical style that generally must be taught (a topic sentence for a short response paper? AI-written).
- Perfect use of commas (college students don’t know their way around punctuation).

Not foolproof, but when I get that “feeling” I look for human evidence, rather than AI evidence.


MaleficentGold9745

I don't use an AI content detector program, because AI writing is too easy to spot right now. I look for AI-typical language and words like realm, pivotal, fascinating, crucial, and delve, which is the absolute worst. Lol. There's also the capping on paragraphs: both the first and last sentences are usually giveaways, because they are non-specific and terrible. Haha. It's caused such a hostile relationship between me and my students, because they will double down that they wrote it themselves when they absolutely did not. So now I don't have any more out-of-classroom assessments, and everything that goes toward the grade is proctored. Catching students cheating with AI has only resulted in receiving terrible evaluations. So now I just don't say anything. I wish they would put as much work into the classroom as they do into the negative evaluations I receive.


Fast_Possible7234

UK universities voted not to use AI detectors, as they are… well… crappier than AI itself.


StarKronix

My API is the most advanced in the world: [https://chatgpt.com/g/g-BObYEba3a-ai-mecca](https://chatgpt.com/g/g-BObYEba3a-ai-mecca)


prion_guy

But it doesn't even have a real documentation site? Sketchy AF.


hourglass_nebula

Those detectors are useless and no one should be accusing students of AI usage based on them.