Don't want to use the publisher's ready-made questions for your monthly exam? Why not write up your own school's activities and stories and turn them into reading tests that look just like the junior high English CAP? That used to be hugely time-consuming, and the English often came out unnatural. Now, with AI, you can generate in minutes a reading test that matches your students' level and connects directly to their school life, and it's guaranteed to be every bit as competency-based and CAP-like as the real thing!
Writing your own reading passages? It’s a nightmare. Designing question sets that actually mimic the discriminative difficulty of the CAP? Even harder. But I found a workaround, a workflow that actually works, and works remarkably well.
We all know the problem with textbook publisher materials. They are not relatable to students’ real lives or interests. Or worse, students might have seen the exact same reading passages and tests at cram school three weeks earlier. So, we write our own? Sure. That’s what I did before AI. It was really challenging and time-consuming, not to mention the awkward phrasing. After all, we’re not native speakers.
Guess what? I ran a workshop recently to fix this. We built a method to turn local school news into CAP-grade tests without the headache, and I was glad to see how warmly the participants reacted to this approach.
Here is the workflow:
1. Smart Translation
Don't start from scratch; start with reality. Grab a school Facebook post about the English Singing Contest. A news snippet about an international postcard exchange. Something students care about and can relate to.
Use Gemini or ChatGPT to do the heavy lifting. But
don't just say "translate." Use this specific command:
"Translate the following text into English,
ensuring it sounds natural and has no awkward phrasing for native speakers.
Make it understandable for 2nd graders."
Why? Because it fixes the vocabulary problem up front: the translation stays within a word bank your students can actually handle. You can adjust the difficulty level before you start building the exam, not after.
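If you would rather script this step than paste into a chat window, the same prompt works through the Gemini API. Here is a minimal sketch in Python, assuming the google-generativeai package and a GEMINI_API_KEY environment variable; the model name and the sample snippet are my own placeholders, not part of the original workflow:

```python
# pip install google-generativeai
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name; any recent Gemini model works

PROMPT = (
    "Translate the following text into English, making sure it sounds natural "
    "to native speakers, with no awkward phrasing. "
    "Make it understandable for 2nd graders.\n\n{source}"
)

def smart_translate(source_text: str) -> str:
    """Run the Step 1 'smart translation' prompt on a school-news snippet."""
    response = model.generate_content(PROMPT.format(source=source_text))
    return response.text

if __name__ == "__main__":
    # Placeholder snippet: a school Facebook post about the English Singing Contest.
    print(smart_translate("本校英語歌唱比賽圓滿落幕,同學們的表現十分亮眼。"))
```

The point is repeatability: once the prompt lives in a script, every colleague gets the same difficulty-controlled output from the same source post.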
2. The NotebookLM Magic
Here is the secret sauce. I tried to create a master prompt with Gemini alone, but it didn’t do what I imagined. You have to train it first with real, previous CAP tests as sources in NotebookLM.
- Feed: Upload the last three years of official CAP tests into Google NotebookLM.
- Analyze: Ask the AI to break down the genres, word counts, and question logic.
- Input: Paste your translated text from Step 1.
- Execute: Tell it to generate a question set that mimics the exact style of the analyzed data (sample prompts below).
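NotebookLM has no public API, so the Analyze and Execute steps happen in its chat panel, and the exact wording is up to you. Prompts along these lines are one way to do it (my own phrasing, adapt freely):

"Based only on the uploaded CAP tests, summarize the typical passage genres, word counts per passage, number of questions per passage, and the kind of reasoning each question type requires."

"Using only the style patterns you identified from the sources, generate a question set for the passage below, mimicking the question count, option format, and distractor style of those tests."

Keeping the prompts grounded in "the sources" is what stops the model from drifting back to its generic quiz style.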
3. The Human Filter
AI can still go wrong. We found an obvious flaw in one question stem: the phrase meant to force an inference turned out to be totally unnecessary. So you still have to read everything and swap out obscure words when necessary. The AI gets you 90% of the way there; you only have to work on the last 10%. Before you hit print, check for grammatical errors, as AI can sound confident yet still hallucinate.
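Part of that last 10% can even be semi-automated. Below is a minimal vocabulary-check sketch, assuming you keep the official CAP reference word list as a plain text file with one word per line; the file name and the helper itself are my own additions, not something we used in the workshop:

```python
# Flag words in a generated passage that fall outside a reference word list.
import re

def load_word_bank(path: str) -> set[str]:
    """Read one word per line, lowercased, from the reference list file."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_obscure_words(passage: str, word_bank: set[str]) -> list[str]:
    """Return passage words (deduplicated, in order) not found in the word bank."""
    seen, flagged = set(), []
    for word in re.findall(r"[A-Za-z']+", passage):
        lower = word.lower()
        if lower not in word_bank and lower not in seen:
            seen.add(lower)
            flagged.append(word)
    return flagged

if __name__ == "__main__":
    bank = load_word_bank("cap_word_list.txt")  # hypothetical file name
    passage = "The contestants rehearsed diligently before the English Singing Contest."
    print(flag_obscure_words(passage, bank))  # words to review by hand
```

It is deliberately dumb: it won't match inflected forms (rehearsed vs. rehearse), so treat the output as a list of words to eyeball, not a verdict.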
This method helped me build two full reading sets that stood up to strict peer scrutiny from the workshop participants. Combining human intuition with generative tech takes the pain out of the process, and honestly, it gave me the confidence to show other teachers how to do it for next semester’s EAT school-based services.