ai-assisted revelations
In Slop Content Festival, Farshad regrets the two times he augmented homework with an LLM. Yet those instances seem to have helped him pivot to more interesting subjects, reframe laziness as efficiency, and recognize how poor the state of education is.
However, how can he confirm that he only used AI twice? Knowing when you are using AI systems can be difficult. I'm shocked that a big anti-AI advocate would even surf the web, given its immense amount of built-in automation. Can you think of times when you inadvertently used, consumed, or otherwise interacted with AI without fully knowing it?
My overall lesson from Farshad's first use case is to replace pointless, unwanted, or boring work with meaningful, wanted, and exciting alternatives where possible. Swap out optional assignments, self-directed topics, and other educational busywork for more interesting and fulfilling options. I'm reminded of this quote, often misattributed to Bill Gates:
> I will always choose a lazy person to do a difficult job because a lazy person will find an easy way to do it.
I found it odd that ChatGPT was labeled an arch-nemesis. Its suspicious sources for an already boring subject helped Farshad pivot to a new topic closer to his own experiences and studies.
As for the second use case, essays flagged as AI-generated: AI detectors are known to be unreliable and prone to false positives. Submitting reference material, or the professor's own writing, to a detector before starting an assignment would verify its accuracy. Such lazy strategies could surface the root problem faster than rewriting thousands of words three times in under 12 hours. Being met with a 100% false positive after every submitted rewrite would drive me mad too.
I'm surprised that the ire is directed at AI rather than at an education system that forces detector usage. Without such a detector in place, would using AI only to source relevant papers have been branded an issue at all?
I do find anti-AI sentiments a bit overblown. AI-generated art and other AI applications are apparently so irrelevant and unnecessary that critics still spend time and effort covering them. Saturated catchphrases like "AI slop", "soul", and "lived experience" make me think speakers increasingly worship the purity spiral of "originality". Should directors be ashamed of using generative AI any more than artists should be ashamed of using their other tools?
Tread carefully with absolute predictions like "never" within the scope of emerging technology; you never know when they could stop applying. Speaking of which, Farshad tracking down historical documents with AI lines up with Elmer's AI Uncovers Hymn, where AI recently helped unlock ancient human art:
> A hymn buried on tablets from an ancient Babylonian library has been deciphered with AI assistance, reports Science Alert. Researchers reconstructed the 3,400-year-old lyrics praising the storm god Adad, revealing insights into ancient spirituality. So how did such ancient hymns sound?
Would you agree that there's at least some semblance of value in such a discovery?