That is where knowing your students comes in. If a student submits AI-generated content for their EE (or their IA), it's fairly easy to suss out in my subject, and even easier to pick up on since I know that student and what they're capable of academically. The rate of false positives is way too high for accusations and bad grades to be thrown around aimlessly.
I would rather give feedback on where I've caught them out and what they can do to rectify the situation. That's why failsafes like the engagement criterion (and regular check-ins) exist in the EE. Moreover, as you said, the IB allows students to use AI-generated content as long as it is cited properly. Just because students are banned from using AI at one stage of their lives doesn't mean they'll stop using it at other stages, or in future ones. Banning it is a losing battle.
I don't think the IBO jumped the gun. They saw the forest, not just the trees, and clearly understood that AI is a technology that is here to stay. It's kind of like saying we should ban internet sources because students will just copy and paste from websites...AI will eventually be as ubiquitous as the internet, so why not encourage and foster its proper usage the same way we do for online sources in education?
IMO the organisations that jumped the gun are the ones who reacted by banning the use of AI for assessments outright...might as well ban the use of internet resources and electronic devices altogether and go back to classic penmanship under direct supervision.
Students have been skirting the rules around plagiarism and academic honesty since the beginning of education...we've learned that prohibition rarely works, especially when the thing you are trying to prohibit is widely available at little to no cost...but now we're supposed to believe that by simply declaring AI unwelcome in academia, its use will suddenly go away?
false equivalence. the internet is a tool, it can be used for any number of things. same can't be said for AI, at least from what i've seen. what's an example of "proper usage" of AI related to academics?
The IBO absolutely jumped the gun. They released their guidelines weeks after ChatGPT came out, and those guidelines don't amount to "proper use" at all. "It's ok to use ChatGPT as long as you cite it"? What if ChatGPT is dead wrong about something, or the assessors can't reproduce the prompt results? I very much doubt that the IBO considered this.
This is the direction a lot of schools are now going in. I'd do it if I had the class time, but for now I'm collecting work digitally and putting it all through Turnitin. It's accurate enough to be a deterrent if you understand how false positives express themselves.
This is why academic honesty policies have such low levels of tolerance. It might be unlikely for a student to get caught on any one given assignment, but if they violate the academic honesty policy on all their work, they will inevitably get caught some day. Even if you pay someone to write your essay for you, chances are that person also plagiarized at least some of it. As an aside, given the news as of late, I doubt that ChatGPT can remain free indefinitely. It isn't a magic tool bestowed on us by heaven. It's very resource-intensive to run and will have to start making money some day.
AI has been in use for decades now...what we know as ChatGPT is just one of the latest forms it has taken. All those Google searches you've been making over the years have involved AI in some form or another.