Abstract
Large language models (LLMs) are increasingly used for academic advising and student dashboards, but sending complete student records to third-party model providers raises serious privacy and governance concerns. This paper examines whether a Model Context Protocol (MCP)-based, per-field API design can reduce student data leakage in advising while maintaining response quality. For our experiments, we generated fully synthetic records representing typical U.S. four-year undergraduates and compared a traditional approach, in which each LLM request includes the whole student profile, with an MCP approach, in which the model can access individual fields only through dedicated tools. We measured data leakage with a tiered leakage score that weights identifiers and academic performance more heavily, and we evaluated response quality under both approaches using an LLM-as-a-judge setup. Across all runs, the traditional method consistently produced the highest average leakage score (ALS), while the MCP method yielded a much lower ALS, largely because the model did not invoke tools that reveal identifiers such as student ID, name, or email. Overall response quality is lower under MCP, but the gap varies by task. In addition, MCP provides fine-grained transparency about which fields are accessed and when, which supports institutional audits and governance. We conclude that MCP can substantially reduce the exposure of confidential student information while still supporting useful advising on tasks that require little data; delivering high-quality support for tasks that need more data, however, demands careful, privacy-aware design.