Research on Lightweight Fine-Tuning Algorithms Integrating SVD with Hybrid Quantization for Educational Contexts
FENG Lijuan, ZHANG Yachao, SONG Xianan
School of Electronics and Electrical Engineering, Zhengzhou University of Science and Technology
Abstract: Large language models show significant potential in educational intelligence applications, but the high memory and compute demands of full-parameter fine-tuning are a major bottleneck for practical deployment. To address this, we propose Edu-ADAQ-LoRA, an adaptive quantized low-rank fine-tuning algorithm for educational data. The method introduces singular value decomposition into the low-rank adaptation framework to dynamically assess parameter importance, and on that basis applies hierarchical mixed-precision quantization and dynamic masking, significantly reducing fine-tuning resource consumption while preserving model expressiveness. Experiments on the ChatGLM2-6B model with educational datasets such as School_math and Alpaca_Chinese show that Edu-ADAQ-LoRA matches or exceeds methods such as LoRA and QLoRA on text-generation metrics while reducing training memory usage by approximately 5%–30%. Further experiments across multiple models and scenarios confirm the algorithm's compatibility and generalization ability. This study offers a feasible technical route for the efficient, lightweight fine-tuning and deployment of large educational models in resource-constrained environments.
Keywords: Large Language Models  Educational Artificial Intelligence  Efficient Fine-tuning  Adaptive Quantization  Low-Rank Adaptation
Fund: 2025 Henan Province Key Scientific Research Project for Higher Education Institutions, "Research on Deep-Learning-Based Language Adaptation Models for Future Teaching Scenarios" (Grant No. 25B413010)
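The abstract describes using SVD-derived parameter importance to drive layer-wise mixed-precision quantization. The paper's actual Edu-ADAQ-LoRA procedure is not reproduced here; as an illustration only, a minimal NumPy sketch of one plausible reading — spectral-energy importance per weight matrix selecting a per-layer bit width — could look like the following (all function names, thresholds, and layer names are hypothetical):

```python
import numpy as np

def svd_importance(weight: np.ndarray, rank: int) -> float:
    """Hypothetical importance score: fraction of spectral energy
    captured by the top-`rank` singular values of the weight matrix."""
    s = np.linalg.svd(weight, compute_uv=False)  # singular values, descending
    return float(s[:rank].sum() / s.sum())

def assign_bits(importance: float) -> int:
    """Toy mixed-precision policy (thresholds are illustrative):
    more important layers are quantized less aggressively."""
    if importance > 0.8:
        return 8
    if importance > 0.5:
        return 4
    return 2

# Score two random stand-in "layers" and build a quantization plan.
rng = np.random.default_rng(0)
layers = {
    "attn.q_proj": rng.normal(size=(64, 64)),
    "mlp.dense": rng.normal(size=(64, 64)),
}
plan = {name: assign_bits(svd_importance(w, rank=8)) for name, w in layers.items()}
print(plan)
```

In such a scheme the dynamic-masking step mentioned in the abstract would then restrict low-rank updates to the parameters the importance score marks as worth adapting; that step is omitted here since its exact form is not specified in the abstract.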


Copyright: Software Engineering Magazine (软件工程杂志社)
Address: No. 2 Xinxiu Street, Hunnan District, Shenyang, Liaoning Province  Postal code: 110179
Tel: 0411-84767887  Fax: 0411-84835089  Email: semagazine@neusoft.edu.cn
ICP filing number: 辽ICP备17007376号-1
Technical support: Beijing Qinyun Technology Development Co., Ltd.
