Use of AI in SCA Research and Writing

Saito Takauji, OL

Matthew W. Parker, JD

Some time ago, a question was posed on the “Ask the SCA Laurels Anything” Facebook group about the view of the Order on the use of AI in the Society. I answered it on the post, but I wanted to take the time to more fully discuss the matter here.

For the purpose of this exploration, we have to set some boundaries or we can't really even get to it. The use of AI is, in my opinion and the opinion of many others, per se unethical. AI uses an insane amount of resources (see: https://www.forbes.com/sites/cindygordon/2024/02/25/ai-is-accelerating-the-loss-of-our-scarcest-natural-resource-water/ and https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117 for examples), and the major generative AI models are being trained on stolen material (see: https://www.wcnc.com/article/features/originals/charlotte-artist-elliana-esquivel-artificial-intelligence-ai-scrape-artwork/275-b7c79345-b9cf-4dd4-b685-459515f6c25f and https://www.kcur.org/podcast/up-to-date/2025-05-25/ai-is-being-trained-on-stolen-books-heres-what-a-bestselling-kansas-author-has-to-say-about-it for examples), which many legal scholars (myself included) do not believe is fair use under copyright.

We also have to deal with the emerging research which shows that using AI might lead to significantly less understanding, both of the topic and of what 'you' actually 'wrote,' than not using AI. A recent study found that people relying on ChatGPT "had the 'weakest' brain connectivity and remembered the least about their essays, highlighting potential concerns about cognitive decline in frequent users."[1] This raises serious concerns even about the argument that we should allow the use of AI as a method of overcoming disability or trauma, since, if replicated, the research indicates that using AI could be more harmful than not using it.

For this discussion, however, we're going to set all of those issues aside in order to discuss the philosophical place of AI in the SCA. So, for the purpose of the discussion, let us assume that the AI being used is one that only uses as much electricity and water as a normal computer server or Google search, and which has been trained only on public domain material or work provided voluntarily by artists who have been fairly compensated for it. Again, this is neither true nor currently possible, and the real answer is 'don't use AI.' But to have a discussion we must move beyond that, while acknowledging its theoretical nature.

The original post listed three different scenarios, to which I am going to add one more. It asked about research:

  • Made with sources found by AI;
  • A first draft made by AI, tweaked by the SCA member; and
  • Wholly written by AI.

I am going to add the following:

  • Using AI as essentially a more intelligent spellcheck and grammar check, helping to tighten up a work primarily made by the SCA member.

The guiding principle I have here is this: The SCA is a celebration of human endeavor. We have an entire peerage order dedicated to the idea that researching things and making things is a worthwhile field that deserves to be separately recognized in the SCA. I would never argue that the SCA should judge someone for the clothes they're wearing, or for purchasing their clothing rather than making it themselves; to do so would be utter hypocrisy, as I have never once in the SCA worn clothing I've made myself. But we have long recognized that making[2]—learning how to make something by itself, and especially going through the rigor of actually making it yourself—is worthy of separate recognition. And if you zoom out, all of our peerage orders are about recognizing human effort, whether it is in showing up and serving, or in learning and excelling at physical pursuits.

And to that end, we recognize that there is a place for assistance in those human efforts when it is given to put someone who is otherwise disadvantaged on a level playing field. On the rapier/cut-and-thrust field we have always outlawed pistol or orthopedic rapier handles unless they are medically necessary[3], and on the archery field we have done the same with modern adaptations that make it easier to draw a bow. As a Laurel, I would no more see a problem with someone using an adaptation to help them deal with dyslexia in writing their documentation than I would view it as an issue that I use glasses to correct my nearsightedness.

But that’s the distinction—to put someone on a level playing field. An orthopedic rapier grip allows someone with a hand or wrist injury to compete where they otherwise wouldn’t, and my glasses allow me to see what I’m being shown the same as someone with naturally 20/20 vision. Neither of them provides a competitive advantage or reduces the level of effort required to master the skill.

And so that is the guiding philosophy I have when examining our use of theoretically ethical AI in SCA research and writing. The use of ethical AI[4] in the SCA is acceptable to the extent that it allows someone who would otherwise be at a disadvantage to participate on a level playing field with other participants; and it is absolutely not acceptable to the extent that it gives someone an advantage they have not earned, or allows them to substitute someone else's work for their own. It is also acceptable to the extent that it is being used as a purely mechanistic method of editorial assistance, or where it only provides possible avenues of information which are then considered and acted on by the individual.

Viewed that way, the scenarios I listed at the beginning become easier to categorize. Using AI as nothing more than a spellcheck or grammar check is almost trivially acceptable, for the same reason that it isn't going to be held against me that I used spellcheck on this article. And documentation entirely written by AI is right out, because it contains no actual human effort.

I also think it is important to distinguish here between research and the product created from it. If someone were to enter only a research paper that had been written by AI into an A&S competition, I would award them zero points in any category—they did not participate in its creation beyond the prompt, and no matter how cunningly crafted the prompt is, that is not worth points. But if they are instead entering something they created—say, a tunic—with AI documentation, I am willing to say they should get appropriate points for all the categories except documentation. They should, in my opinion, get zero points for documentation; and their work should be looked at carefully to ensure that there is nothing in it which relies on AI hallucination as opposed to actual practice. But they should get an appropriate number of points for their craftsmanship, ambition, execution, and all the other things that any specific Kingdom judges A&S entries on.

Research made with sources found by AI is more difficult, and here we have to return to the boundaries of our theoretical AI. In the world as it actually exists, AI has a serious problem with making up sources. This has caused problems in law[5], public health policy[6], and airline customer service[7], among others. So, to the extent that it continues to be prone to making up things it thinks the user wants to hear, I would put research using AI sources into the forbidden category. To the extent that this problem is solved in our hypothetical ethical AI, I would view it as no different than using Google. Which is to say that it would be (theoretically) perfectly acceptable as a starting point, but that developing a discerning eye for which sources are valid, and which are suspect, is part of the growth we expect to see from someone on the Laurel path.

And finally, we have the theoretical documentation which was written as a first draft by an AI but was tweaked by the SCA member. To my mind, this one depends entirely on the extent to which the draft is 'tweaked'. If the end result is such that the primary author of the work is the SCA member, who essentially only used the AI as a prompt to help them come up with some structure or word choice, then it is probably acceptable; but if the end result is a paper which was primarily written by the AI and only had a veneer of the SCA member's style put on it, then it is no different than the documentation written solely by AI without the 'tweak'. Essentially, if the AI would justifiably be listed as the primary author, or given a co-author credit, this shouldn't be allowed; if it would justifiably be thanked in the first footnote for its assistance, then it would likely be acceptable.[8]

There’s an important thing to note with this last category. One of the arguments offered by proponents of AI is that it can help people with things like structure, grammar, and word choice, to help them appear more professional in the presentation of their ideas. And it can be used for that, but at what cost to the individual? Structure, grammar, and word choice are not incidental to the craft of writing—they are the craft of writing. If you ask AI for input on those things and then consider what made the AI’s version better, that’s one thing. If you’re delegating those things to an AI, you’re not developing those skills yourself. You’re ultimately depriving yourself of the opportunity to develop as a writer by asking a machine to do it for you, and you’ll never get to the point where you don’t need the machine.[9]

And ultimately that is the point of this extremely long-winded exploration. Even in a world where we have our hypothetical ethical and non-detrimental AI, and even in those circumstances where it would be acceptable to use that AI, I would still ask an SCA member to consider the purpose behind their use. You do not need to be an amazing writer to become a Laurel, and you do not need to have perfect research either. It is, ideally, a journey of personal growth, where you develop those skills and pass them on to others. If you're not doing that, even if it is ethically acceptable to use a substitute, are you really walking the path you want to be walking?

(And at the risk of undermining that banger of an ending: don't use AI in the SCA—it is absolutely unethical, horribly resource-consuming, and might actively harm your understanding of topics.)


[1] Katie Hawkinson, ChatGPT use linked to cognitive decline, research reveals, The Independent (June 20, 2025). https://www.the-independent.com/news/world/americas/ai-chatgpt-essays-cognitive-decline-b2774224.html.

[2] Some people have begun flagging the use of em dashes as a sign of AI writing. I don’t use them because AI wrote this, I use them because I have an English degree—and what happens when you go to school for English literature is you stop using semi-colons as much and start using em dashes more. You can pry them and the Oxford comma from my cold, dead, and clammy hands.

[3] This was formerly in the Rapier Marshal’s handbook as an explicit rule. It is not as of the 2024 edition, but as a Marshal this is still my understanding.

[4] Which, I cannot stress enough, doesn't exist, but I'm going to stop banging that drum so I don't get boring and repetitive.

[5] See, e.g., Debra C. Weiss, Sanctions imposed for 'collective debacle' involving AI hallucinations and 2 firms, including K&L Gates, ABA Journal (May 14, 2025). https://www.abajournal.com/web/article/judge-imposes-sanctions-for-collective-debacle-involving-ai-hallucinations-and-2-law-firms-including-k.

[6] See, e.g., Nia Prater, Did RFK Jr.’s Crew Use AI to Write Error-Filled MAHA Report?, New York Magazine – Intelligencer (May 30, 2025). https://nymag.com/intelligencer/article/did-rfks-crew-use-ai-to-write-error-filled-maha-report.html.

[7] See, e.g., Ashley Belanger, Air Canada must honor refund policy invented by airline’s chatbot, Ars Technica (February 16, 2024). https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/.

[8] See, e.g., Patricia L. Judd, The TRIPS Balloon Effect, 46 New York University Journal of International Law and Politics 471 (2014). I was one of Professor Judd’s Research Assistants for this article and am thanked in the first footnote. No one (especially not me) would say I wrote it in any meaningful way, despite providing a semester’s worth of research for it.

[9] Obviously this does not apply if the help with grammar, word choice, or structure is a response to something such as dyslexia, where the issue is not one of developing skills but of adaptation for a disability.
