Fine-Tuning Gemma 3: A Step-by-Step Guide With a Financial Q&A Dataset
Fine-tune the new Gemma model using the finance reasoning dataset to improve its accuracy and adopt the style of the dataset.
Mar 21, 2025 · 11 min read
Learn to develop large language models (LLMs) with PyTorch and Hugging Face, using the latest deep learning and NLP techniques.
Fine-tune Llama for custom tasks using TorchTune, and learn techniques for efficient fine-tuning such as quantization.
Unlock more advanced AI applications, like semantic search and recommendation engines, using OpenAI's embedding model!
[See More](https://www.datacamp.com/category/artificial-intelligence)
Related
Learn how to run inference on GPUs/TPUs and fine-tune the latest Gemma 7b-it model on a role-play dataset.
This is a simple guide to fine-tuning Gemma 2 9B-It on patient-doctor conversations and converting the model into GGUF format so that it can be used locally with the Jan application.
Learn to infer and fine-tune LLMs with TPUs and implement model parallelism for distributed training on 8 TPU devices.
Discover Microsoft's new LLM series and boost your model's accuracy from 65% to 86% by fine-tuning it on the E-commerce classification dataset.
Learn how fine-tuning large language models (LLMs) improves their performance in tasks like language translation, sentiment analysis, and text generation.
Learn how to use PaliGemma 2 Mix to build an AI-powered bill scanner and spending analyzer that extracts and categorizes expenses from receipt...