Building an LLM-Powered Slide Deck Generator with LangGraph

Creating presentation slides from just a single prompt is now feasible with large language models (LLMs). In this tutorial, we will build a fully automated system that takes a raw, detailed question or topic and generates a multi-slide presentation in response. The system will use an LLM (like OpenAI’s GPT API or a local model via Ollama) to produce slide content, and leverage LangGraph for orchestrating the workflow. We’ll cover both the backend logic (in Python) and a simple frontend (in React) to showcase the results. The goal is a fast and unique slide generation pipeline that outputs a beautiful presentation (PPTX or PDF) with minimal human effort.

What you will learn:

  • How LLMs can automate slide content creation (outlines, bullet points, etc.).
  • Using LangGraph to coordinate multi-step tasks (like web searches for facts or image fetching) in a reliable, stateful workflow.
  • Generating PowerPoint slides programmatically with Python (using the python-pptx library).
  • Building a simple React UI that lets the user enter a prompt, trigger slide generation, and choose the output format (web preview, PPTX, or PDF).

By the end, you’ll have a blueprint for an app that takes a single prompt and returns a set of slides — an LLM-driven slide deck generator that is fully autonomous from prompt to presentation.

[React UI]
| POST /generateSlides (prompt, format=pptx|pdf)
v
[FastAPI/Flask]
v
[LangGraph App State]
├─ plan_slides (LLM → outline JSON)
├─ fill_content (LLM per-slide bullets) [optional if plan returns bullets]
├─ enrich (search/images) [optional]
└─ build_slides (python-pptx → deck)
└─ convert_to_pdf (LibreOffice/API) [optional]
v
[File Store] → download link returned to UI

Designing the Prompt-to-Slides Workflow

Before diving into code, let’s outline how the system will work. The process involves several steps, which we can orchestrate with LangGraph to be fully autonomous:

  1. User Input (Prompt): The user provides a detailed question or topic description. This could be a single question (e.g., “Explain quantum computing basics”) or a raw text passage that needs summarizing into slides.
  2. Slide Outline Generation: The backend uses the LLM to parse the prompt and create a presentation outline. This might include deciding on the number of slides and each slide’s title or main point. For example, the LLM might output a list of slide titles or a JSON structure outlining each slide’s content.
  3. Content Population: For each slide in the outline, the LLM (or a set of LLM calls) generates the detailed content: concise bullet points for the slide body, and optionally a short description of a supporting visual.
  4. Data & Media Enrichment (Optional): If the prompt requires factual data or visuals, the workflow can call a web search tool for up-to-date facts or fetch a relevant image, and attach the results to the slide content.
  5. Slide Deck Assembly: Using the content generated, the backend creates the actual slides file. We will use python-pptx to create a PowerPoint presentation: a title slide followed by one content slide per outline entry, with bullets written into each layout's placeholders.
  6. Output Format Selection: Once the PPTX is generated, the user can choose the format: download the PPTX directly, or have the backend convert it to PDF (for example via headless LibreOffice).

Each of these steps can be implemented as nodes in a LangGraph workflow, enabling a controlled sequence. LangGraph allows us to treat each step as a module (node) and manage state between them (e.g., passing the outline to the content generator, then passing all content to the PPTX creator). This graph-based design is more maintainable than a single monolithic script, especially if we add complexity (like conditionally doing web search or handling different output formats).

1) Define the LangGraph state & skeleton (Python)

# app/graph.py
from typing import TypedDict, List, Dict
from langgraph.graph import StateGraph, START, END

# Note: the node modules import this state via app/nodes/types.py — keep a re-export there.
class SlideGenState(TypedDict, total=False):
    user_prompt: str
    slides_outline: List[str]
    slides_content: List[Dict]   # {title_text, text: [...], image_desc?}
    output_format: str           # "pptx" | "pdf"
    output_path: str
    errors: List[str]

workflow = StateGraph(SlideGenState)

Why JSON structure? It’s robust for parsing and assembling slides.
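For concreteness, here is the kind of outline the planner (next section) is asked to produce. The titles and bullets below are illustrative placeholders, not output from a real model run:

```python
import json

# A hypothetical planner reply: a JSON array of slides,
# where the first element may be a title slide.
raw = """
[
  {"title_text": "Quantum Computing Basics", "subtitle_text": "A 10-minute primer"},
  {"title_text": "Why Qubits Differ",
   "text": ["• Superposition enables parallel states", "• Entanglement links qubit outcomes"]},
  {"title_text": "Key Takeaways",
   "text": ["• Hardware is still noisy"],
   "image_desc": "stylized qubit sphere"}
]
"""

slides = json.loads(raw)
print(len(slides))              # 3
print(slides[1]["title_text"])  # Why Qubits Differ
```

Because every downstream node only reads keys like `title_text`, `text`, and `image_desc`, the outline, content, and build steps stay decoupled.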

2) Plan slides (single LLM call → structured outline)

# app/nodes/plan.py
import json
from typing import cast
from openai import OpenAI
from .types import SlideGenState

client = OpenAI() # or wire your own client

PLAN_PROMPT = """You are a slide deck creator.
Given the prompt, output a JSON array of slides.
Each slide: { "title_text": "...", "text": ["• ...","• ..."], "image_desc": "optional" }.
First element may be a title slide with 'title_text' and optional 'subtitle_text'.
Keep bullets concise (<= 12 words each)."""

def plan_slides_node(state: SlideGenState) -> SlideGenState:
    messages = [
        {"role": "system", "content": "Return ONLY valid JSON."},
        {"role": "user", "content": f"{PLAN_PROMPT}\n\nPrompt:\n{state['user_prompt']}"},
    ]
    resp = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, temperature=0.3
    )
    raw = resp.choices[0].message.content
    try:
        slides = json.loads(raw)
    except Exception as e:
        state.setdefault("errors", []).append(f"JSON parse error: {e}")
        slides = [{"title_text": "Untitled", "text": []}]
    state["slides_content"] = cast(List[Dict], slides)
    state["slides_outline"] = [s.get("title_text", "") for s in slides]
    return state

Add node + edge:

# app/graph.py (continued)
from app.nodes.plan import plan_slides_node
workflow.add_node("plan_slides", plan_slides_node)
workflow.add_edge(START, "plan_slides")

This mirrors the “single-call JSON plan” pattern for speed and determinism.
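One practical hardening step: models sometimes wrap their JSON in markdown code fences despite instructions. A small helper of my own (not a library function) tolerates that before parsing; you could call it in place of `json.loads(raw)` inside `plan_slides_node`:

```python
import json
import re

def extract_json(raw: str):
    """Parse JSON from an LLM reply, tolerating ```json ... ``` fences."""
    text = raw.strip()
    # If the reply is wrapped in a markdown fence, keep only its body.
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)

print(extract_json('```json\n[{"title_text": "Intro"}]\n```'))  # [{'title_text': 'Intro'}]
```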

3) (Optional) Fill content per slide (precision mode)

Useful if you want deeper bullets per slide:

# app/nodes/fill.py
from openai import OpenAI
from .types import SlideGenState

client = OpenAI()

def fill_content_node(state: SlideGenState) -> SlideGenState:
    outline = state.get("slides_outline", [])
    enriched = []
    for title in outline:
        if not title.strip():
            continue
        r = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": f"Create 3-5 crisp bullets for slide '{title}'. "
                           f"Keep under 12 words, no fluff.\nContext:\n{state['user_prompt']}",
            }],
            temperature=0.4,
        )
        bullets = [
            b.strip("• ").strip()
            for b in r.choices[0].message.content.split("\n")
            if b.strip()
        ]
        enriched.append({"title_text": title, "text": bullets})
    state["slides_content"] = enriched or state.get("slides_content", [])
    return state

Wire it if you want depth:

workflow.add_node("fill_content", fill_content_node)
workflow.add_edge("plan_slides", "fill_content")
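The naive newline split in `fill_content_node` can leave numbering, dashes, or over-long lines in the bullets. A normalizer along these lines (my own helper, with illustrative limits) keeps them clean:

```python
def normalize_bullets(text: str, max_words: int = 12) -> list[str]:
    """Turn a free-form LLM reply into clean, length-limited bullet strings.
    Helper of my own; not part of LangGraph or python-pptx."""
    bullets = []
    for line in text.split("\n"):
        b = line.strip().lstrip("•-* ").strip()
        # Drop leading "1." / "2)" style numbering.
        if b[:2].rstrip(".)").isdigit():
            b = b[2:].lstrip(".) ").strip()
        if not b:
            continue
        words = b.split()
        if len(words) > max_words:
            b = " ".join(words[:max_words]) + "…"
        bullets.append(b)
    return bullets

print(normalize_bullets("• First point\n- Second point here"))  # ['First point', 'Second point here']
```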

4) (Optional) Enrich with search/images

Add a node to fetch a stat or an Unsplash image per `image_desc`. Keep it modular so you can toggle it. (Design guidance: implement tool use as a node, update `slides_content[i]["img_path"]`, and let the builder place it.)
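A minimal sketch of such a node is below; `resolve_image` is a stub of my own that maps a description to a hypothetical local placeholder path, where a real build would download from Unsplash or a DALL·E endpoint:

```python
from typing import Dict, Optional

def resolve_image(description: str) -> Optional[str]:
    """Stub: map an image description to a local file path.
    Swap in a real Unsplash/DALL·E fetch that downloads to disk."""
    if not description:
        return None
    return "assets/placeholder.png"  # hypothetical local asset

def enrich_node(state: Dict) -> Dict:
    """Attach an img_path to each slide that requested a visual."""
    for slide in state.get("slides_content", []):
        desc = slide.get("image_desc")
        path = resolve_image(desc) if desc else None
        if path:
            slide["img_path"] = path
    return state

state = {"slides_content": [{"title_text": "A"},
                            {"title_text": "B", "image_desc": "rocket"}]}
state = enrich_node(state)
print(state["slides_content"][1]["img_path"])  # assets/placeholder.png
```

Because the node only adds an optional key, removing it from the graph never breaks the builder.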

5) Build the PPTX (python-pptx)

# app/nodes/build.py
from pptx import Presentation
from pptx.util import Inches, Pt
from .types import SlideGenState
from pathlib import Path

def build_slides_node(state: SlideGenState) -> SlideGenState:
    slides = state.get("slides_content", [])
    prs = Presentation()  # or Presentation("template.pptx")

    # Title slide
    if slides:
        title = slides[0]
        s0 = prs.slides.add_slide(prs.slide_layouts[0])
        s0.shapes.title.text = title.get("title_text", "")
        try:
            s0.placeholders[1].text = title.get("subtitle_text", "")
        except (IndexError, KeyError):
            pass

    # Content slides
    for slide in slides[1:]:
        s = prs.slides.add_slide(prs.slide_layouts[1])
        s.shapes.title.text = slide.get("title_text", "")
        tf = s.placeholders[1].text_frame
        tf.clear()
        for bullet in slide.get("text", []):
            p = tf.add_paragraph()
            p.text = bullet
            p.level = 0
            p.font.size = Pt(20)
        # Optional image (img_path is set by the enrich node, if enabled)
        img = slide.get("img_path")
        if img:
            s.shapes.add_picture(img, left=Inches(0.5), top=Inches(3), height=Inches(2.5))

    out = Path("output")
    out.mkdir(exist_ok=True)
    file = out / "deck_output.pptx"
    prs.save(file.as_posix())
    state["output_path"] = file.as_posix()
    return state

This mirrors the recommended “iterate slide data → populate PPTX placeholders” approach.

(Optional) PDF conversion tip: call headless LibreOffice or a cloud converter in a branch node.
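A branch node for this can shell out to LibreOffice's command-line converter. The sketch below assumes the `soffice` binary is on PATH (install locations vary); the first function only builds the command, so it is easy to test without LibreOffice installed:

```python
import subprocess
from pathlib import Path

def libreoffice_pdf_cmd(pptx_path: str, outdir: str = "output") -> list[str]:
    """Build the headless LibreOffice command that converts a PPTX to PDF."""
    return [
        "soffice", "--headless",
        "--convert-to", "pdf",
        "--outdir", outdir,
        pptx_path,
    ]

def convert_to_pdf_node(state: dict) -> dict:
    """Branch node: only converts when the user asked for PDF."""
    if state.get("output_format") == "pdf":
        cmd = libreoffice_pdf_cmd(state["output_path"])
        subprocess.run(cmd, check=True)  # requires LibreOffice installed
        state["output_path"] = str(Path("output") / (Path(state["output_path"]).stem + ".pdf"))
    return state

print(libreoffice_pdf_cmd("output/deck_output.pptx"))
```

You can route into it conditionally with LangGraph's `add_conditional_edges` on `build_slides`, so `"pptx"` requests skip the conversion entirely.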

Add node + END:

workflow.add_node("build_slides", build_slides_node)
# Pick ONE of the next two edges (adding both would create parallel paths):
workflow.add_edge("fill_content", "build_slides")    # if using the fill step
# workflow.add_edge("plan_slides", "build_slides")   # else wire plan directly to build
workflow.add_edge("build_slides", END)

app = workflow.compile()

6) Expose an API (FastAPI)

# main.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from app.graph import app as graph_app

class GenReq(BaseModel):
    prompt: str
    format: str = "pptx"

api = FastAPI()

@api.post("/api/generateSlides")
def generate(req: GenReq):
    # Empty prompts should fail fast with a 400 (see testing checklist)
    if not req.prompt.strip():
        raise HTTPException(status_code=400, detail="Prompt must not be empty")
    state = {"user_prompt": req.prompt, "output_format": req.format}
    result = graph_app.invoke(state)
    return {"file": result["output_path"]}

# run: uvicorn main:api --reload --port 8000

7) Minimal React client (Vite + TS)

// slide-ui/src/App.tsx
import { useState } from "react";

export default function App() {
  const [prompt, setPrompt] = useState("");
  const [format, setFormat] = useState<"pptx"|"pdf">("pptx");
  const [loading, setLoading] = useState(false);
  const [file, setFile] = useState<string | null>(null);

  const onSubmit = async () => {
    setLoading(true); setFile(null);
    const r = await fetch("http://localhost:8000/api/generateSlides", {
      method: "POST",
      headers: {"Content-Type": "application/json"},
      body: JSON.stringify({ prompt, format })
    });
    const data = await r.json();
    setFile(data.file);
    setLoading(false);
  };

  return (
    <main style={{maxWidth: 820, margin: "40px auto", padding: 16}}>
      <h1>AI Slide Deck Generator</h1>
      <textarea
        value={prompt}
        onChange={e => setPrompt(e.target.value)}
        placeholder="Topic, audience, tone, key points…"
        rows={8}
        style={{width: "100%"}}
      />
      <div style={{margin: "12px 0"}}>
        <select value={format} onChange={e => setFormat(e.target.value as "pptx"|"pdf")}>
          <option value="pptx">PPTX</option>
          <option value="pdf">PDF</option>
        </select>{" "}
        <button onClick={onSubmit} disabled={loading || !prompt.trim()}>
          {loading ? "Generating…" : "Generate Slides"}
        </button>
      </div>
      {file && (
        <p style={{marginTop: 12}}>
          ✅ Done — <a href={`http://localhost:8000/${file}`} download>Download deck</a>
        </p>
      )}
    </main>
  );
}

You can serve the output/ directory statically in FastAPI or add a /download route.

8) Power features (add as you scale)

  • Variants & A/B: Branch in LangGraph to produce n narrative styles (executive, data-first, visual-heavy) from the same prompt.
  • Guardrails: Add a node to enforce slide/word limits, reading levels, and banned phrases before build.
  • Images: Create an images_node that turns image_desc into Unsplash/DALLE requests and injects img_path.
  • Reliability: Persist state; if an LLM call fails, resume from the last good node. LangGraph shines here.
  • Templates: Load Presentation("template.pptx") for brand fonts, colors, and master layouts.
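The guardrails idea above can be an ordinary Python node wired just before `build_slides`. A minimal sketch, with illustrative limits and a toy banned-word list of my own:

```python
from typing import Dict, List

MAX_SLIDES = 12
MAX_WORDS_PER_BULLET = 12
BANNED = {"synergy", "disrupt"}  # illustrative single-word list

def guardrails_node(state: Dict) -> Dict:
    """Enforce deck-level limits before the PPTX is built."""
    slides: List[Dict] = state.get("slides_content", [])[:MAX_SLIDES]
    for slide in slides:
        cleaned = []
        for bullet in slide.get("text", []):
            # Drop banned words, then hard-cap bullet length.
            words = [w for w in bullet.split() if w.lower().strip(".,") not in BANNED]
            cleaned.append(" ".join(words[:MAX_WORDS_PER_BULLET]))
        slide["text"] = cleaned
    state["slides_content"] = slides
    return state

state = {"slides_content": [{"title_text": "T", "text": ["Leverage synergy to win big"]}]}
print(guardrails_node(state)["slides_content"][0]["text"])  # ['Leverage to win big']
```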

9) Prompt recipe (copy-paste)

Role: Senior presentation designer
Goal: Create a crisp, 10–12 slide deck from ONE prompt.
Style: concise, jargon-free, bullets < 12 words, audience-aware.
Output: JSON array:
{ "title_text": "", "subtitle_text": "",
"text": ["• ...","• ...","• ..."],
"image_desc": "optional visual concept" }
Slides:

  1. Title
  2. Problem
  3. Why now
  4. Core idea
  5. System/Architecture
  6. Key metrics
  7. Use cases
  8. Risks/Assumptions
  9. Roadmap
  10. Call to action

10) Testing checklist

  • Same prompt twice → stable outline (set temperature=0 in the plan node for a deterministic plan).
  • Deck < 12 slides unless user asks for more.
  • No bullets > 12 words (assert and trim).
  • Empty prompt → 400 from API.
  • Output file exists and opens in PowerPoint/Slides.

Grab-and-Go Repo Structure

/app
  /nodes
    plan.py
    fill.py
    build.py
    enrich.py        # optional
  graph.py
main.py              # FastAPI
/template/template.pptx
/output/.gitkeep
/slide-ui/           # React (Vite)

This workflow turns a single high-quality prompt into a fast, repeatable, and brand-consistent slide deck. Start with the linear path; then add enrichment, style variants, and approval gates as your needs evolve. Your team — and your calendar — will thank you.

Build it and share back: try the implementation, then comment with how far you got and what you improved. If this helped, please like and share so more builders can benefit. I’ll publish the working codebase here shortly for you to clone and extend.
