
Releases: langgenius/dify

v1.10.0-rc1 - Event-Driven Workflows

30 Oct 15:25


Pre-release

Introducing Trigger Functionality

Trigger = When something → then do something

This is the foundation for event-driven Workflow capabilities, covering the following types:

  • Schedule (time-based triggers)
  • SaaS Integration Event (events from external SaaS platforms such as Slack/GitHub/Linear, integrated via Plugins)
  • Webhook (external HTTP callbacks)
  • Additional types still under discussion

These features are designed for Workflow only; Chatflow, Agent, and BasicChat are not supported.


Design Premise

After careful consideration, we concluded that the start node design cannot fully embody the philosophy behind Triggers.
Therefore, we have redesigned the start node as a component bound to the WebApp or Service API.
This means:

  • The workflow input parameters are equivalent to the form defined by the start node.
  • Trigger types can define their own input formats instead of following Dify’s start node format:
    • Webhook: users can freely define the HTTP payload structure they need (see the illustrative payload after this list).
    • Plugins: predefine workflow input parameters for specific third-party platforms.
    • Schedule: only needs a single $current_time parameter.
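
To illustrate the Webhook case: a user might define a payload such as the following. The structure is entirely hypothetical, since Webhook triggers let each user choose their own fields:

    {
      "event": "ticket.created",
      "ticket_id": "T-1024",
      "priority": "high"
    }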

As a result, we introduced three kinds of start nodes, all of which are trigger nodes. The product design is complete, and the UI/UX design is finished as well.
The implementation of Trigger will be divided into:

  • WebHook — configuration of webhook-related information in Canvas
  • Schedule — time-based triggers
  • Plugins — plugin system (most third-party platform integrations will depend on this)
[Screenshots: WebHook, Schedule, and Plugins trigger configuration]

Why

  1. Enable more scenarios
    Currently, building something like a Discord ticket bot that files issues in Linear for an enterprise setup requires custom glue code, pre-published extensions, and manual token retrieval from the Discord developer console instead of simple one-click OAuth binding.
    Similar pain points exist for GitHub PR plugin review, Hello Dify email replies, etc., all requiring manual download/upload/trigger actions.
  2. Reduce fragmented experiences
    While it’s possible to achieve similar outcomes using external automation platforms with Dify API calls, the cross-platform experience is fragmented, and many external automation platforms are adding their own LLM orchestration capabilities.
  3. Centralize configuration and management
    Developers often have to host multiple services to poll for events. Endpoints can help, but they are not purpose-built for trigger scenarios, making the configuration flow unintuitive.
  4. Real user demand
    Multiple community members have requested this feature.

Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (execute in the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d
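
After the services come back up, you can optionally check container health with standard Docker Compose commands (the api service name matches the default Compose file and may differ in customized setups):

    docker compose ps
    docker compose logs -f api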

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.10.0-rc1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.
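
To confirm that the migration in step 4 was applied, Flask-Migrate's standard status command prints the current database revision (run it from the api directory):

    uv run flask db current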


What's Changed

  • Replace export button with more actions button in workflow control panel by @lyzno1 in #24033
  • feat: add scroll to selected node button in workflow header by @lyzno1 in #24030
  • feat: comprehensive trigger node system with Schedule Trigger implementation by @lyzno1 in #24039
  • feat: update workflow run button to Test Run with keyboard shortcut by @lyzno1 in #24071
  • feat: Test Run dropdown with dynamic trigger selection by @lyzno1 in #24113
  • fix: simplify trigger-schedule hourly mode calculation and improve UI consistency by @lyzno1 in #24082
  • Remove workflow features button by @lyzno1 in #24085
  • feat: implement Schedule Trigger validation with multi-start node topology support by @lyzno1 in #24134
  • fix: resolve merge conflict between Features removal and validation enhancement by @lyzno1 in #24150
  • Refactor Start node UI to User Input and optimize EntryNodeContainer by @lyzno1 in #24156
  • fix: remove duplicate weekdays keys in i18n workflow files by @lyzno1 in #24157
  • UI improvements: fix translation and custom icons for schedule trigger by @lyzno1 in #24167
  • fix: initialize recur fields when switching to hourly frequency by @lyzno1 in #24181
  • feat: implement multi-select monthly trigger schedule by @lyzno1 in #24247
  • feat(workflow): Plugin Trigger Node with Unified Entry Node System by @lyzno1 in #24205
  • feat: replace mock data with dynamic workflow options in test run dropdown by @lyzno1 in #24320
  • refactor: comprehensive schedule trigger component redesign by @lyzno1 in #24359
  • feat/trigger universal entry by @Yeuoly in #24358
  • feat/trigger: support specifying root node by @Yeuoly in #24388
  • feat: webhook trigger frontend by @CathyL0 in #24311
  • fix(trigger-webhook): remove redundant WebhookParam type and simplify parameter handling by @CathyL0 in #24390
  • feat(trigger-schedule): simplify timezone handling with user-centric approach by @lyzno1 in #24401
  • refactor: Use specific error types for workflow execution by @Yeuoly in #24475
  • refactor: rename RunAllTriggers icon to TriggerAll for semantic clarity by @lyzno1 in #24478
  • fix: when workflow only has trigger node can't save by @hjlarry in #24546
  • fix: when workflow not has start node can't open service api by @hjlarry in #24564
  • feat: implement workflow onboarding modal system by @lyzno1 in #24551
  • feat: webhook trigger backend api by @hjlarry in #24387
  • feat: fix i18n missing keys and merge upstream/main by @lyzno1 in #24615
  • refactor(sidebar): Restructure app operations with toggle func...

v1.9.2 - Sharper, Faster, and More Reliable

22 Oct 08:48

This release focuses on improving stability, async performance, and developer experience. Expect cleaner internals, better workflow control, and improved observability across the stack.


Warning

A recent change has modernized the Dify integration for Weaviate (see PR #25447 and the related update in PR #26964). The upgrade switches the Weaviate Python client from v3 to v4 and raises the minimum required Weaviate server version to 1.24.0. With this update:

  • If you are running an older Weaviate server (e.g., v1.19.0), you must upgrade your server to at least v1.24.0 before updating Dify.
  • The code now uses the new client API and supports gRPC for faster operations, which may require opening port 50051 in your Docker Compose files (see the sketch after this list).
  • Data migration between server versions may require re-indexing using Weaviate’s Cursor API or standard backup/restore procedures.
  • The Dify documentation will be updated to provide migration steps and compatibility guidance.
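
If you run Weaviate via Docker Compose, exposing the gRPC port is a small addition to the service definition. A sketch, assuming your Compose service is named weaviate (adjust to your own file):

    weaviate:
      ports:
        - "50051:50051"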

Action required:

  • Upgrade your Weaviate server to v1.24.0 or higher.
  • Follow the migration guide to update your data and Docker configuration as described in the latest official Dify documentation.
  • Ensure your environment meets the new version requirements before deploying Dify updates.

✨ Highlights

Workflow & Agents

Integrations & SDK

Web & UI

  • Faster load times by splitting and lazy‑loading constant files (by @yangzheli in #26794)
  • Improved DataSources with marketplace plugin integration and filtering (by @WTW0313 in #26810)
  • Added tax tooltips to pricing footer (by @CodingOnStar in #26705)
  • Account creation now syncs interface language with display settings (by @feelshana in #27042)

⚙️ Core Improvements


🧩 Fixes


🧹 Cleanup & DevX


Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (execute in the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.9.2
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

  • [Chore/Refactor] Implement lazy initialization for useState calls to prevent re-computation by @Copilot in #26252
  • Refactor: Use @ns.route for tags API by @asukaminato0721 in #26357
  • chore: translate i18n files and update type definitions by @github-actions[bot] in #26440
  • Fix: Enable Pyright and Fix Typing Errors in Datasets Controller by @asukaminato0721 in #26425
  • minor fix: fix some translations: trunk should use native, and some translation typos by @NeatGuyCoding in #26469
  • Fix typing errors in core/model_runtime by @asukaminato0721 in #26462
  • fix single-step runs support user input as structured_output variable values by @goofy-z in #26430
  • Refactor: Enable type checking for core/ops and fix type errors by @asukaminato0721 in #26414
  • improve: Explicitly delete task Redis key on completion in AppQueueManager by @Blackoutta in #26406
  • chore: bump pnpm version by @lyzno1 in #26010
  • Fix a typo in prompt by @casio12r in #25583
  • fix: duplicate chunks by @kenwoo...

v1.9.1 – 1,000 Contributors, Infinite Gratitude

29 Sep 11:35

Congratulations to our community on reaching 1,000 contributors!


🚀 New Features

  • Infrastructure & DevOps:

    • Next.js upgraded to 15.5, now leveraging Turbopack in development for a faster, more modern build pipeline by @17hz in #24346.
    • Provided X-Dify-Version headers in marketplace API access for better traceability by @RockChinQ in #26210.
    • Security reporting improvements, with new sec report workflow added by @crazywoola in #26313.
  • Pipelines & Engines:

    • Built-in pipeline templates now support language configuration, unlocking multilingual deployments by @WTW0313 in #26124.
    • Graph engine now blocks response nodes during streaming to avoid unintended outputs by @laipz8200 in #26364 / #26377.
  • Community & Documentation:

🛠 Fixes & Improvements

  • Debugging & Logging:

    • Fixed NodeRunRetryEvent debug logging not working properly in Graph Engine by @quicksandznzn in #26085.
    • Fixed LLM node losing Flask context during parallel iterations, ensuring stable concurrent runs by @quicksandznzn in #26098.
    • Fixed agent-strategy prompt generator error by @quicksandznzn in #26278.
  • Search & Parsing:

  • Pipeline & Workflow:

    • Fixed workflow variable splitting logic (requires ≥2 parts) by @zhanluxianshen in #26355.
    • Fixed tool node attribute tool_node_version judgment error causing compatibility issues by @goofy-z in #26274.
    • Fixed iteration conversation variables not syncing correctly by @laipz8200 in #26368.
    • Fixed Knowledge Base node crash when retrieval_model is null by @quicksandznzn in #26397.
    • Fixed workflow node mutation issues, preventing props from being incorrectly altered by @hyongtao-code in #26266.
    • Removed restrictions on adding workflow nodes by @zxhlyh in #26218.
  • File Handling:

    • Fixed remote filename handling for Content-Disposition: inline responses, which were previously parsed incorrectly, by @sorphwer in #25877.
    • Synced FileUploader context with props to fix inconsistent file parameters in cached variable view by @Woo0ood in #26199.
    • Fixed variable not found error (#26144) by @sqewad in #26155.
    • Fixed db connection error in embed_documents() by @AkisAya in #26196.
    • Fixed model list refresh when credentials change by @zxhlyh in #26421.
    • Fixed retrieval configuration handling and missing vector_setting in dataset components by @WTW0313 in #26361 / #26380.
    • Fixed ChatClient audio_to_text files keyword bug by @EchterTimo in #26317.
    • Added missing import IO in client.py by @EchterTimo in #26389.
    • Removed FILES_URL in default .yaml settings by @JoJohanse in #26410.
  • Performance & Networking:

    • Improved pooling of httpx clients for requests to code sandbox and SSRF protection by @Blackoutta in #26052.
    • Distributed plugin auto-upgrade tasks with concurrency control by @RockChinQ in #26282.
    • Switched plugin auto-upgrade cache to Redis for reliability by @RockChinQ in #26356.
    • Fixed plugin detail panel not showing when >100 plugins are installed by @JzoNgKVO in #26405.
    • Debounce reference fix for performance stability by @crazywoola in #26433.
  • UI/UX & Display:

    • Fixed lingering display-related issues (translations, UI consistency) by @hjlarry in #26335.
    • Fixed broken CSS animations under Turbopack by naming unnamed animations in CSS modules by @lyzno1 in #26408.
    • Fixed verification code input using wrong maxLength prop by @hyongtao-code in #26244.
    • Fixed array-only filtering in List Operator picker, removed file-children fallback, aligned child types by @Woo0ood in #26240.
    • Fixed translation inconsistencies in ja-JP: “ナレッジベース” vs. “ナレッジの名前とアイコン” by @mshr-h in #26243 and @NeatGuyCoding in #26270.
    • Improved “time from now” i18n support by @hjlarry in #26328.
    • Standardized dataset-pipeline i18n terminology by @lyzno1 in #26353.
  • Code & Components:

    • Refactored component exports for consistency by @ZeroZ-lab in #26033.
    • Refactored router to apply ns.route style by @laipz8200 in #26339.
    • Refactored lint scripts to remove duplication and simplify naming by @lyzno1 in #26259.
    • Applied @console_ns.route decorators to RAG pipeline controllers (internal refactor) by @Copilot in #26348.
    • Added missing type="button" attributes in components by @Copilot in #26249.

Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (execute in the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.9.1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

  • fix(api): graph engine debug logging NodeRunRetryEvent not effective by @quicksandznzn in #26085
  • fix full_text_search name by @JohnJyong in #26104
  • bump nextjs to 15.5 and turbopack for development mode by @17hz in #24346
  • chore: refactor component exports for consistency by @ZeroZ-lab in #26033
  • fix:add some explanation for oceanbase parser selection by @longbingljw in #26071
  • feat(pipeline): add language support to built-in pipeline templates and update related components by @WTW0313 in #26124
  • ci: Add hotfix/** branches to build-push workflow triggers by @QuantumGhost in #26129
  • fix(api): Fix variable truncation for list[File] value in output mapping by @QuantumGhost in #26133
  • one example of Session by @asukaminato0721 in #24135
  • fix(api):LLM node losing Flask context during parallel iterations by @quicksandznzn in #26098
  • fix(search-input): ensure proper value extraction in composition end handler by @yangzheli in #26147
  • delete end_user check by @JohnJyong in #26187
  • improve: pooling httpx clients for requests to code sandbox and ssrf by @Blackoutta in #26052
  • fix: remote filename will be 'inline' if Content-Disposition: inline by @sorphwer in #25877
  • perf: provide X-Dify-Version for marketplace api access by @RockChinQ in #26210
  • Chore/remove add node restrict of workflow by @zxhlyh in #26218
  • Fix array-only filtering in List Operator picker; remove file children fallback and align child types. by @Woo0ood in #26240
  • fix: sync FileUploader context with props to fix inconsistent file parameter state in “View cached variables”. by @Woo0ood in #26199
  • fix: add echarts and zrender to transpilePackages for ESM compatibility by @lyzno1 in #26208
  • chore: fix inaccurate translation in ja-JP by @mshr-h in #26243
  • aliyun_trace: unify the span attribute & compatible CMS 2.0 endpoint by @hieheihei in #26194
  • fix(api): resolve error in agent‑strategy prompt generator by @quicksandznzn in #26278
  • minor: fix translation with the key value uses 「ナレッジの名前とアイコン」 while the rest of the file uses 「ナレッジベース」 by @NeatGuyCoding in #26270
  • refactor(web): simplify lint scripts, remove duplicates and standardize naming by @lyzno1 in #26259
  • fmt first by @asukaminato0721 in #26221
  • fix: resolve UUID parsing error for default user session lookup by @Cluas in #26109
  • Fix: avoid mutating node props by @hyongtao-code in #26266
  • update gen_ai semconv for aliyun trace by @hieheihei in #26288
  • chore: streamline AGENTS.md guidance by @laipz8200 in #26308
  • rm assigned but unused by @asukaminato0721 in #25639
  • Chore/add sec report by @crazywoola in #26313
  • Fix ChatClient.audio_to_text files keyword to make it work by @EchterTimo in #26317
  • perf: distribute concurrent pl...

v1.9.0 – Orchestrating Knowledge, Powering Workflows

22 Sep 12:23


🚀 Introduction

In Dify 1.9.0, we are introducing two major new capabilities: the Knowledge Pipeline and the Queue-based Graph Engine.

The Knowledge Pipeline provides a modularized and extensible workflow for knowledge ingestion and processing, while the Queue-based Graph Engine makes workflow execution more robust and controllable. We believe these will help you build and debug AI applications more smoothly, and we look forward to your experiences to help us continuously improve.


📚 Knowledge Pipeline

✨ Introduction

With the brand-new orchestration interface for knowledge pipelines, we introduce a fundamental architectural upgrade that reshapes how document processing is designed and executed. It provides a more modular and flexible workflow that enables users to orchestrate every stage of the pipeline. Enhanced with a wide range of powerful plugins available in the marketplace, it empowers users to flexibly integrate diverse data sources and processing tools. Ultimately, this architecture enables building highly customized, domain-specific RAG solutions that meet enterprises’ growing demands for scalability, adaptability, and precision.

❓ Why Do We Need It?

Previously, Dify's RAG users encountered persistent challenges in real-world adoption, from inaccurate knowledge retrieval and information loss to limited data integration and extensibility. Common pain points include:

  • 🔗 restricted integration of data sources
  • 🖼️ missing critical elements such as tables and images
  • ✂️ suboptimal chunking results

All of these lead to poor answer quality and hinder overall model performance.

In response, we reimagined RAG in Dify as an open and modular architecture, enabling developers, integrators, and domain experts to build document processing pipelines tailored to their specific requirements—from data ingestion to chunk storage and retrieval.

🛠️ Core Capabilities

🧩 Knowledge Pipeline Architecture

The Knowledge Pipeline is a visual, node-based orchestration system dedicated to document ingestion. It provides a customizable way to automate complex document processing, enabling fine-grained transformations and bridging raw content with structured, retrievable knowledge. Developers can build workflows step by step, like assembling puzzle pieces, making document handling easier to observe and adjust.

📑 Templates & Pipeline DSL


  • ⚡ Start quickly with official templates
  • 🔄 Customize and share pipelines by importing/exporting via DSL for easier reusability and collaboration

🔌 Customizable Data Sources & Tools


Each knowledge base can support multiple data sources. You can seamlessly integrate local files, online documents, cloud drives, and web crawlers through a plugin-based ingestion framework. Developers can extend the ecosystem with new data-source plugins, while marketplace processors handle specialized use cases like formulas, spreadsheets, and image parsing — ensuring accurate ingestion and structured representation.

🧾 New Chunking Strategies

In addition to General and Parent-Child modes, the new Q&A Processor plugin supports Q&A structures. This expands coverage for more use cases, balancing retrieval precision with contextual completeness.

🖼️ Image Extraction & Retrieval


Extract images from documents in multiple formats, store them as URLs in the knowledge base, and enable mixed text-image outputs to improve LLM-generated answers.

🧪 Test Run & Debugging Support

Before publishing a pipeline, you can:

  • ▶️ Execute a single step or node independently
  • 🔍 Inspect intermediate variables in detail
  • 👀 Preview string variables as Markdown in the variable inspector

This provides safe iteration and debugging at every stage.

🔄 One-Click Migration from Legacy Knowledge Bases

Seamlessly convert existing knowledge bases into the Knowledge Pipeline architecture with a single action, ensuring smooth transition and backward compatibility.

🌟 Why It Matters

The Knowledge Pipeline makes knowledge management more transparent, debuggable, and extensible. It is not the endpoint, but a foundation for future enhancements such as multimodal retrieval, human-in-the-loop collaboration, and enterprise-level data governance. We’re excited to see how you apply it and share your feedback.


⚙️ Queue-based Graph Engine

❓ Why Do We Need It?

Previously, designing workflows with parallel branches often led to:

  • 🌀 Difficulty managing branch states and reproducing errors
  • ❌ Insufficient debugging information
  • 🧱 Rigid execution logic lacking flexibility

These issues reduced the usability of complex workflows. To solve this, we redesigned the execution engine around queue scheduling, improving management of parallel tasks.

🛠️ Core Capabilities

📋 Queue Scheduling Model

All tasks enter a unified queue, where the scheduler manages dependencies and order. This reduces errors in parallel execution and makes topology more intuitive.

🎯 Flexible Execution Start Points

Execution can begin at any node, supporting partial runs, resumptions, and subgraph invocations.

🌊 Stream Processing Component

A new ResponseCoordinator handles streaming outputs from multiple nodes, such as token-by-token LLM generation or staged results from long-running tasks.

🕹️ Command Mechanism

With the CommandProcessor, workflows can be paused, resumed, or terminated during execution, enabling external control.

🧩 GraphEngineLayer

A new plugin layer that allows extending engine functionality without modifying core code. It can monitor states, send commands, and support custom monitoring.


Quickstart

  1. Prerequisites
    • Dify version: 1.9.0 or higher
  2. How to Enable
    • Enabled by default, no additional configuration required.
    • Debug mode: set DEBUG=true to enable DebugLoggingLayer (see the .env sketch after this list).
    • Execution limits:
      • WORKFLOW_MAX_EXECUTION_STEPS=500
      • WORKFLOW_MAX_EXECUTION_TIME=1200
      • WORKFLOW_CALL_MAX_DEPTH=10
    • Worker configuration (optional):
      • WORKFLOW_MIN_WORKERS=1
      • WORKFLOW_MAX_WORKERS=10
      • WORKFLOW_SCALE_UP_THRESHOLD=3
      • WORKFLOW_SCALE_DOWN_IDLE_TIME=30
    • Applies to all workflows.
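
These settings are ordinary environment variables. A minimal .env fragment for a source-code deployment that turns on debug logging and bounds the worker pool (values taken from the defaults above):

    DEBUG=true
    WORKFLOW_MIN_WORKERS=1
    WORKFLOW_MAX_WORKERS=10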

More Controllable Parallel Branches

Execution Flow:

Start ─→ Unified Task Queue ─→ WorkerPool Scheduling
                          ├─→ Branch-1 Execution
                          └─→ Branch-2 Execution
                                  ↓
                            Aggregator
                                  ↓
                                  End

Improvements:
1. All tasks enter a single queue, managed by the Dispatcher.
2. WorkerPool auto-scales based on load.
3. ResponseCoordinator manages streaming outputs, ensuring correct order.

Example: Command Mechanism

from core.workflow.graph_engine.manager import GraphEngineManager

# Send stop command
GraphEngineManager.send_stop_command(
    task_id="workflow_task_123",
    reason="Emergency stop: resource limit exceeded"
)

Note: pause/resume functionality will be supported in future versions.


Example: GraphEngineLayer

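The original example is linked from the release page rather than reproduced here. As a rough illustration of the concept, a custom layer can subscribe to engine events for monitoring. A minimal sketch, in which the import path, base-class name, and on_event hook are assumptions rather than the actual Dify API:

import logging

# Hypothetical import path; check the Dify source tree for the real location.
from core.workflow.graph_engine.layers import GraphEngineLayer


class LoggingLayer(GraphEngineLayer):
    """Log every engine event observed. Event classes such as
    GraphRunStartedEvent and NodeRunSucceededEvent are listed in the FAQ below."""

    def on_event(self, event):  # assumed hook name
        logging.info("graph event: %s", type(event).__name__)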


FAQ

  1. Is this release focused on performance?
    No. The focus is on stability, clarity, and correctness of parallel branches. Performance improvements are a secondary benefit.

  2. What events can be subscribed to?

    • Graph-level: GraphRunStartedEvent, GraphRunSucceededEvent, GraphRunFailedEvent, GraphRunAbortedEvent
    • Node-level: NodeRunStartedEvent, NodeRunSucceededEvent, NodeRunFailedEvent, NodeRunRetryEvent
    • Container nodes: IterationRunStartedEvent, IterationRunNextEvent, IterationRunSucceededEvent, LoopRunStartedEvent, LoopRunNextEvent, LoopRunSucceededEvent
    • Streaming output: NodeRunStreamChunkEvent
  3. How can I debug workflow execution?

    • Enable DEBUG=true to view detailed logs.
    • Use DebugLoggingLayer to record events.
    • Add custom monitoring via GraphEngineLayer.

Future Plans

This release is just the beginning. Upcoming improvements include:

  • Debugging Tools: A visual interface to view execution states and variables in real time.
  • Intelligent Scheduling: Optimize scheduling strategies using historical data.
  • More Complete Command Support: Add Pause/Resume, breakpoint debugging.
  • Human in the Loop: Support human intervention during execution.
  • Subgraph Functionality: Enhance modularity and reusability.
  • Multimodal Embedding: Support richer content types beyond text.

We look forward to your feedback and experiences to make the engine more practical.


Upgrade Guide

Important

After upgrading, you must run the following migration to transform existing datasource credentials. This step is required to ensure compatibility with the new version:

uv run flask transform-datasource-credentials
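
For Docker Compose deployments, the same command can be run inside the api container once it is up, using the default container name:

    docker exec -it docker-api-1 uv run flask transform-datasource-credentials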

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)
    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak

v2.0.0-beta.2

08 Sep 07:20
Pre-release

Fixes

  • Fixed an issue in Workflow / Chatflow where using an LLM node with Memory could cause errors.
  • Fixed a blocking issue in non-pipeline mode when adding new Notion pages to the document list.
  • Fixed dark mode styling issues.

Upgrade Guide

Important

If upgrading from 0.x or 1.x, you must run the following migration to transform existing datasource credentials. This step is required to ensure compatibility with the new version:

uv run flask transform-datasource-credentials

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the release branch

    git checkout 2.0.0-beta.2
    git pull origin 2.0.0-beta.2
  3. Stop the services (execute in the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d
  6. Migrate data after the container starts

    docker exec -it docker-api-1 uv run flask transform-datasource-credentials

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 2.0.0-beta.2
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
    uv run flask transform-datasource-credentials
  5. Finally, run the API server, Worker, and Web frontend Server again.

v2.0.0-beta.1 – Orchestrating Knowledge, Powering Workflows

04 Sep 13:38


🚀 Introduction

In Dify 2.0, we are introducing two major new capabilities: the Knowledge Pipeline and the Queue-based Graph Engine.

This is a beta release, and we hope to explore these improvements together with you and gather your feedback. The Knowledge Pipeline provides a modularized and extensible workflow for knowledge ingestion and processing, while the Queue-based Graph Engine makes workflow execution more robust and controllable. We believe these will help you build and debug AI applications more smoothly, and we look forward to your experiences to help us continuously improve.


📚 Knowledge Pipeline

✨ Introduction

With the brand-new orchestration interface for knowledge pipelines, we introduce a fundamental architectural upgrade that reshapes how document processing is designed and executed. It provides a more modular and flexible workflow that enables users to orchestrate every stage of the pipeline. Enhanced with a wide range of powerful plugins available in the marketplace, it empowers users to flexibly integrate diverse data sources and processing tools. Ultimately, this architecture enables building highly customized, domain-specific RAG solutions that meet enterprises’ growing demands for scalability, adaptability, and precision.

❓ Why Do We Need It?

Previously, Dify's RAG users encountered persistent challenges in real-world adoption, from inaccurate knowledge retrieval and information loss to limited data integration and extensibility. Common pain points include:

  • 🔗 restricted integration of data sources
  • 🖼️ missing critical elements such as tables and images
  • ✂️ suboptimal chunking results

All of these lead to poor answer quality and hinder overall model performance.

In response, we reimagined RAG in Dify as an open and modular architecture, enabling developers, integrators, and domain experts to build document processing pipelines tailored to their specific requirements—from data ingestion to chunk storage and retrieval.

🛠️ Core Capabilities

🧩 Knowledge Pipeline Architecture

The Knowledge Pipeline is a visual, node-based orchestration system dedicated to document ingestion. It provides a customizable way to automate complex document processing, enabling fine-grained transformations and bridging raw content with structured, retrievable knowledge. Developers can build workflows step by step, like assembling puzzle pieces, making document handling easier to observe and adjust.

📑 Templates & Pipeline DSL


  • ⚡ Start quickly with official templates
  • 🔄 Customize and share pipelines by importing/exporting via DSL for easier reusability and collaboration

🔌 Customizable Data Sources & Tools


Each knowledge base can support multiple data sources. You can seamlessly integrate local files, online documents, cloud drives, and web crawlers through a plugin-based ingestion framework. Developers can extend the ecosystem with new data-source plugins, while marketplace processors handle specialized use cases like formulas, spreadsheets, and image parsing — ensuring accurate ingestion and structured representation.

🧾 New Chunking Strategies

In addition to General and Parent-Child modes, the new Q&A Processor plugin supports Q&A structures. This expands coverage for more use cases, balancing retrieval precision with contextual completeness.

🖼️ Image Extraction & Retrieval


Extract images from documents in multiple formats, store them as URLs in the knowledge base, and enable mixed text-image outputs to improve LLM-generated answers.

🧪 Test Run & Debugging Support

Before publishing a pipeline, you can:

  • ▶️ Execute a single step or node independently
  • 🔍 Inspect intermediate variables in detail
  • 👀 Preview string variables as Markdown in the variable inspector

This provides safe iteration and debugging at every stage.

🔄 One-Click Migration from Legacy Knowledge Bases

Seamlessly convert existing knowledge bases into the Knowledge Pipeline architecture with a single action, ensuring smooth transition and backward compatibility.

🌟 Why It Matters

The Knowledge Pipeline makes knowledge management more transparent, debuggable, and extensible. It is not the endpoint, but a foundation for future enhancements such as multimodal retrieval, human-in-the-loop collaboration, and enterprise-level data governance. We’re excited to see how you apply it and share your feedback.


⚙️ Queue-based Graph Engine

❓ Why Do We Need It?

Previously, designing workflows with parallel branches often led to:

  • 🌀 Difficulty managing branch states and reproducing errors
  • ❌ Insufficient debugging information
  • 🧱 Rigid execution logic lacking flexibility

These issues reduced the usability of complex workflows. To solve this, we redesigned the execution engine around queue scheduling, improving management of parallel tasks.

🛠️ Core Capabilities

📋 Queue Scheduling Model

All tasks enter a unified queue, where the scheduler manages dependencies and order. This reduces errors in parallel execution and makes topology more intuitive.

🎯 Flexible Execution Start Points

Execution can begin at any node, supporting partial runs, resumptions, and subgraph invocations.

🌊 Stream Processing Component

A new ResponseCoordinator handles streaming outputs from multiple nodes, such as token-by-token LLM generation or staged results from long-running tasks.

🕹️ Command Mechanism

With the CommandProcessor, workflows can be paused, resumed, or terminated during execution, enabling external control.

🧩 GraphEngineLayer

A new plugin layer that allows extending engine functionality without modifying core code. It can monitor states, send commands, and support custom monitoring.


Quickstart

  1. Prerequisites
    • Dify version: 2.0.0-beta.1 or higher
  2. How to Enable
    • Enabled by default, no additional configuration required.
    • Debug mode: set DEBUG=true to enable DebugLoggingLayer.
    • Execution limits:
      • WORKFLOW_MAX_EXECUTION_STEPS=500
      • WORKFLOW_MAX_EXECUTION_TIME=1200
      • WORKFLOW_CALL_MAX_DEPTH=10
    • Worker configuration (optional):
      • WORKFLOW_MIN_WORKERS=1
      • WORKFLOW_MAX_WORKERS=10
      • WORKFLOW_SCALE_UP_THRESHOLD=3
      • WORKFLOW_SCALE_DOWN_IDLE_TIME=30
    • Applies to all workflows.

More Controllable Parallel Branches

Execution Flow:

Start ─→ Unified Task Queue ─→ WorkerPool Scheduling
                          ├─→ Branch-1 Execution
                          └─→ Branch-2 Execution
                                  ↓
                            Aggregator
                                  ↓
                                  End

Improvements:
1. All tasks enter a single queue, managed by the Dispatcher.
2. WorkerPool auto-scales based on load.
3. ResponseCoordinator manages streaming outputs, ensuring correct order.

Example: Command Mechanism

from core.workflow.graph_engine.manager import GraphEngineManager

# Send stop command
GraphEngineManager.send_stop_command(
    task_id="workflow_task_123",
    reason="Emergency stop: resource limit exceeded"
)

Note: pause/resume functionality will be supported in future versions.


Example: GraphEngineLayer

GraphEngineLayer Example


FAQ

  1. Is this release focused on performance?
    No. The focus is on stability, clarity, and correctness of parallel branches. Performance improvements are a secondary benefit.

  2. What events can be subscribed to?

    • Graph-level: GraphRunStartedEvent, GraphRunSucceededEvent, GraphRunFailedEvent, GraphRunAbortedEvent
    • Node-level: NodeRunStartedEvent, NodeRunSucceededEvent, NodeRunFailedEvent, NodeRunRetryEvent
    • Container nodes: IterationRunStartedEvent, IterationRunNextEvent, IterationRunSucceededEvent, LoopRunStartedEvent, LoopRunNextEvent, LoopRunSucceededEvent
    • Streaming output: NodeRunStreamChunkEvent
  3. How can I debug workflow execution?

    • Enable DEBUG=true to view detailed logs.
    • Use DebugLoggingLayer to record events.
    • Add custom monitoring via GraphEngineLayer.

Future Plans

This beta release is just the beginning. Upcoming improvements include:

  • Debugging Tools: A visual interface to view execution states and variables in real time.
  • Intelligent Scheduling: Optimize scheduling strategies using historical data.
  • More Complete Command Support: Add Pause/Resume, breakpoint debugging.
  • Human in the Loop: Support human intervention during execution.
  • Subgraph Functionality: Enhance modularity and reusability.
  • Multimodal Embedding: Support richer content types beyond text.

We look forward to your feedback and experiences to make the engine more practical.


Upgrade Guide

Important

After upgrading, you must run the following migration to transform existing datasource credentials. This step is required to ensure compatibility with the new version:

uv run flask transform-datasource-credentials

Docker Compose Deployments

  1. Back up your cus...

v1.8.1

03 Sep 11:07

🌟 What's New in v1.8.1? 🌟

Welcome to version 1.8.1! 🎉🎉🎉 This release focuses on stability, performance improvements, and developer experience enhancements. We've shipped new features and resolved critical database issues based on community feedback.

🚀 Features

  • Export DSL from History: You can now export workflow DSL directly from the version history panel. (See #24939, by GuanMu)
  • Downvote with Reason: Enhanced feedback system allowing users to provide specific reasons when downvoting responses. (See #24922, by jubinsoni)
  • Multi-modal/File: Added filename support to multi-modal prompt messages. (See #24777, by -LAN-)
  • Advanced Chat File Handling: Improved assistant content parts and file handling in advanced chat mode. (See #24663, by QIN2DIM)

⚡ Enhancements

  • DB Query: Optimized SQL queries that were performing partial full table scans. (See #24786, by Novice)
  • Type Checking: Migrated from MyPy to Basedpyright. (See #25047, by -LAN-)
  • Indonesian Language Support: Added Indonesian (id-ID) language support. (See #24951, by lyzno1)
  • Jinja2 Template: LLM prompt Jinja2 templates now support more variables. (See #24944, by 17hz)

🐛 Fixes

  • Security/XSS: Fixed XSS vulnerability in block-input and support-var-input components. (See #24835, by lyzno1)
  • Persistence Session Management: Resolved critical database session binding issues that were causing "not bound to a Session" errors. (See #25010, #24966, by Will)
  • Workflow & UI Issues: Fixed workflow publishing problems, resolved UUID v7 conflicts, and addressed various UI component issues including modal handling and input field improvements. (See #25030, #24643, #25034, #24864, by Will, -LAN-, 17hz & Atif)

Version 1.8.1 represents a significant step forward in platform stability and developer experience. The migration to modern type checking, combined with database session fixes and comprehensive bug fixes, creates a more robust foundation for future features.

Huge thanks to all our contributors who made this release possible! We welcome your ongoing feedback to help us continue improving the platform together.


Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (execute in the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.8.1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.



v1.8.0 - Async workflows meet multi-model management with OAuth-powered integrations.

27 Aug 07:27


🎉 Dify v1.8.0 Release Notes 🎉

Hello, Dify community! We're excited to bring you version 1.8.0, packed with significant improvements across the board - from enhanced security and performance optimizations to a revamped UI and powerful new workflow features. Let's dive into what's new!

🚀 New Features

Workflow & Agent Capabilities

  • Multi-Model Credentials System: Implemented a comprehensive multi-model credentials system with new database tables, enabling more flexible model management. Thanks to @hjlarry! (#24451)
  • MCP Support with OAuth: Added Model Context Protocol (MCP) support for resource discovery with OAuth authentication, expanding integration possibilities. Kudos to @CodeSpaceiiii! (#24223)
  • Default Values for Workflow Variables: All workflow start node variable types now support default values, making workflows more robust. Thanks to @17hz! (#24129)
  • Agent Node Token Usage: Exposed agent node usage metrics for better monitoring and optimization. Thanks to @DavideDelbianco! (#24355)

UI/UX Enhancements

  • Document Sorting in Knowledge Base: Added sorting functionality for document status in the Knowledge base, improving document management. Thanks to @jubinsoni! (#24252)
  • Delete Avatar Functionality: Users can now delete their avatars with a confirmation modal for safety. Thanks to @Zhehao-P! (#24099)
  • Extensible Goto-Anything Commands: Improved goto-anything commands with an extensible architecture for better navigation. Thanks to @ZeroZ-lab! (#24091)
  • Document Name Tooltips: Added helpful tooltips to document names in lists for better visibility. Thanks to @aopstudio! (#24467)
  • Auto-login After Setup: Implemented secure auto-login after admin account setup. Thanks to @laipz8200! (#24395)

API & Backend

  • Redis SSL/TLS Authentication: Added support for Redis SSL/TLS certificate authentication for enhanced security. Thanks to @laipz8200! (#23624)
  • Flask-RESTX Migration: Successfully migrated from Flask-RESTful to Flask-RESTX for better API documentation and structure. Thanks to @asukaminato0721! (#24310)
  • Swagger Authorization: Added authorization configuration support to Swagger documentation. Thanks to @hjlarry! (#24518)

🐛 Bug Fixes

Critical Fixes

  • Database Performance: Fixed major performance issue by removing provider table updates on every message creation. Thanks to @QuantumGhost! (#24520)
  • Authentication Error Handling: Fixed login error handling by properly raising exceptions instead of returning. Thanks to @laipz8200! (#24452)
  • OAuth Redis Compatibility: Resolved OAuth Redis compatibility issues. Thanks to @Mairuis! (#23959)
  • HTTP Request Node File Access: Fixed file access from Start Node with remote URLs in HTTP Request Node. Thanks to @dlmu-lq! (#24293)

Workflow Improvements

  • Loop Exit Conditions: Fixed loop exit condition to accept variables from nodes inside loops. Thanks to @baonudesifeizhai! (#24257)
  • Agent Node Token Counting: Properly separated prompt and completion tokens in agent node token counting. Thanks to @laipz8200! (#24368)
  • Number Input in Tool Configure: Fixed number input behavior in agent node tool configuration. Thanks to @Stream29! (#24152)
  • Delete Conversations via API: Fixed conversation deletion through API to properly remove from database. Thanks to @jubinsoni! (#23591)

UI/UX Fixes

  • Dark Mode Improvements: Multiple dark mode fixes including backdrop-blur for plugin dropdowns, hover button contrast, and embedded modal icons. Thanks to @lyzno1 and team!
  • React Warnings: Fixed Next.js React warnings by properly moving shareCode updates to useEffect. Thanks to @Eric-Guo! (#24468)
  • Border Radius Consistency: Fixed UI border radius inconsistencies across components. Thanks to @jubinsoni! (#24486)

🔒 Security Enhancements

  • User Enumeration Prevention: Standardized authentication error messages to prevent user enumeration attacks. Thanks to @laipz8200! (#24324)
  • Custom Headers Fix: Fixed custom headers being ignored when using bearer or basic authorization. Thanks to @liugddx! (#23584)
  • Fixed a SQL injection vulnerability in Oracle VDB.

⚡ Performance & Infrastructure

Workflow Performance Breakthrough

  • Async WorkflowRun/WorkflowNodeRun Repositories: Implemented asynchronous repositories for workflow execution, delivering dramatic performance improvements. This architectural change enables non-blocking operations during workflow runs, with early testing showing execution times nearly halved in typical workflows. This optimization particularly benefits complex workflows with multiple nodes and parallel operations. Thanks to @xinlmain for this game-changing performance enhancement! (#20050)

Database Optimizations

  • Semantic Version Comparison: Implemented semantic version comparison for vector database version checks. Thanks to @MatriQ! (#24416)
  • AnalyticDB Improvements: Fixed rollback issues when AnalyticDB create zhparser failed. Thanks to @lpdink! (#24260)
  • Dataset Cleanup: Optimized dataset cleanup task for better performance. Thanks to @aopstudio! (#24467)

Testing Infrastructure

  • Comprehensive Test Coverage: Added testcontainers-based integration tests for multiple services including workflow app, website, auth, conversation, and more. Massive thanks to @NeatGuyCoding for this extensive testing effort!
  • Rate Limiting Tests: Added comprehensive test suite for rate limiting module. Thanks to @farion1231! (#23765)

Docker & Deployment

  • Docker Build Optimization: Optimized Docker build process with cleanup script for Jest work files. Thanks to @WTW0313! (#24450)
  • Amazon ECS Deployment: Added deployment pattern documentation using Amazon ECS and CDK. Thanks to @tmokmss! (#23985)
  • Configurable Plugin Buffer Sizes: Added configurable stdio buffer sizes for plugins in compose file. Thanks to @crazywoola! (#23980)

📚 Documentation

  • CLAUDE.md for LLM Development: Added comprehensive CLAUDE.md file for LLM-assisted development guidance. Thanks to @laipz8200! (#23946)
  • API Documentation: Enhanced API documentation for files endpoint, MCP, and service API. Thanks to @laipz8200!
  • Localized Documentation: Updated localized README files to link to corresponding localized CONTRIBUTING.md files. Thanks to @aopstudio! (#24504)
  • Markdown Auto-formatting: Implemented auto-formatting for markdown files using mdformat tool. Thanks to @asukaminato0721! (#24242)

🧹 Code Quality & Refactoring

  • Type Safety Improvements: Major improvements to type annotations and static type checking across the codebase. Thanks to @Gnomeek, @hyongtao-code, and @asukaminato0721!
  • AST-Grep Integration: Added ast-grep tool for maintaining codebase consistency. Thanks to @asukaminato0721! (#24149)
  • Dead Code Removal: Cleaned up empty files and unused code throughout the project. Thanks to @hyongtao-code! (#23990)
  • Import Optimization: Replaced deprecated functions and optimized imports across the codebase.

🌐 Internationalization

  • Automated Translation Updates: Continuous updates to i18n translation files with improved accuracy
  • Japanese Translation Corrections: Fixed Japanese translation issues. Thanks to @kurokobo! (#24041)
  • Translation Synchronization: Better synchronization of translations across all supported languages

This release represents a major step forward in Dify's evolution, with substantial improvements to performance, security, and developer experience. We're particularly excited about the enhanced workflow capabilities and the comprehensive testing infrastructure that will help us maintain high quality standards going forward.

Thank you to all contributors who made this release possible! Your dedication to improving Dify continues to drive us forward.

Happy building with Dify 1.8.0! 🚀


Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (execute in the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.8.0
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.



v1.7.2

11 Aug 09:26

✨ What’s New in v1.7.2? ✨

Alright folks, buckle up! Version 1.7.2 is here, packed with a ton of quality-of-life improvements, bug fixes, and some slick new features to make your Dify experience even smoother. This release has been a community effort, and we want to give a big shoutout to all the contributors, especially the new folks who jumped in – welcome to the party! 🎉

🚀 Major Feature: Workflow Visualization

A new relations panel allows you to visualize dependencies within your workflows. Big thanks to @Minamiyama for #21998! Now when you select any node and press Shift, you will see magic flowing lines.


🚀 Major Feature: Node Search

You can now easily find nodes in the workflow editor using the new search feature by @croatialu, @ZeroZ-lab, @HyaCiovo, @MatriQ, @lyzno1, @crazywoola in #23685.


⚙️ Enhancements

  • Notion Database Row Extraction: The Notion Database integration now extracts rows in their original order and appends the Row Page URL. Thanks @ThreeFish-AI! #22646
  • Workflow API Version Specification: You can now specify workflow versions in the workflow and chat APIs. Thanks, @qiaofenlin! #23188
  • Tool JSON Response: Datetime and UUID are now supported in tool JSON responses, making those integrations even more powerful. Kudos to @jiangbo721! #22738
  • API Documentation: The API documentation has been revamped with a modern design and improved UX. Thanks @lyzno1! #23490
  • Workflow Node Alignment: Get those workflows looking sharp with enhanced node alignment options. Thanks, @ZeroZ-lab! #23451
  • Service API File Preview Endpoint: A new endpoint to preview service API files, making it easier to manage and debug your services. Hat tip to @lyzno1! #23534
  • Testcontainers Tests: We're serious about stability! @NeatGuyCoding and others have been hard at work adding Testcontainers tests for various services (account, app, message, workflow, etc.), ensuring our services are rock solid.

🛠️ Bug Fixes

  • Full-Text Search with Tencent Cloud VectorDB: Fixed an issue where metadata filters weren't being applied correctly in full-text search mode for Tencent Cloud VectorDB. Thanks, @dlmu-lq! #23564
  • Workflow Knowledge Retrieval Cache: Fixed a cache bug in workflow knowledge retrieval. Another one bites the dust, thanks to @yunqiqiliang! #23597
  • HTTP Request Component: Resolved a multipart/form-data boundary issue in the HTTP Request component. Thanks to @baonudesifeizhai for fixing this long-standing issue! #23008
  • Conversation Variable Sync: Fixed an issue where conversation variables weren't being synced for existing conversations. Thanks to @laipz8200 for hunting this down! #23649
  • Internationalization (i18n): Numerous i18n fixes and enhancements across the board. Shoutout to @lyzno1 and the i18n team for their dedication!
  • Edge Cases Handled: We squashed a number of edge-case bugs, thanks to the contributions of many in the community.

🛡️ Security

  • XSS Vulnerability: A big thank you to @lyzno1 for identifying and fixing an XSS vulnerability in the authentication check-code pages. #23295

Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (execute in the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.7.2
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.



v1.7.1

28 Jul 12:50

🎉 Dify v1.7.1 Release Notes 🎉

Hello, Dify enthusiasts! We're thrilled to announce version 1.7.1 of our platform, bringing a fresh batch of refinements and enhancements to your workflow. Here's a breakdown of what's changed:

🚀 New Features

  • Default Value for Select Inputs: Now you can set a default value for select input fields, providing a smoother user experience when working with forms. Thanks to @antonko. (#21192)

  • Selecting Variables in Conditional Filters: We've added the capability to select variables in conditional filtering within list operations. This feature, spearheaded by @leslie2046, will streamline data manipulation tasks. (#23029)

  • OpenAPI Schema Enhancement: Support for allOf in OpenAPI properties inside schema has been added, courtesy of @mike1936. It's a big win for API design consistency. (#22975)

  • K8s Pure Migration Option: We've introduced a pure migration option for the api component within Kubernetes deployments, making migrations simpler for large-scale systems. Thanks, @BorisPolonsky! (#22750)

⚙️ Bug Fixes

  • Langfuse Integration Path: Incorrect path handling with Langfuse integration has been corrected by @chenguowei. Now it behaves just right within your API calls. (#22766)

  • CELERY_BROKER Improvements: For those using RabbitMQ, the broker handling issue during batch document segment additions has been addressed by @zhaobingshuang. No more endless processing status! (#23038)

  • Metadata Batch Edit Cross-page Issue: Resolved a previous issue with cross-page document selection during metadata batch edits. Thanks to @liugddx for smoothing out the workflow. (#23000)

  • Windows PEM KeyPath Fix: Corrected path errors for private.pem key files on Windows systems, ensuring cross-platform reliability. Thanks to @silencesdg. (#22814)

🔄 Improvements

  • ToolTip Component Refinement: We've refined the interaction of ToolTip components within menus to enhance readability and usability. Kudos to @HyaCiovo for this optimization. (#23023)

  • PostgreSQL Healthcheck: Enhanced the healthcheck command to avoid fatal log errors in PostgreSQL. Thanks to @J2M3L2's talismanic touch. (#22749)

  • Time Formatting Internationalization: The time formatting feature has been refactored for better international support, thanks to @HyaCiovo. (#22870)

🪄 Miscellaneous

  • Revamped Tool List Page: @nite-knite made the tool list page slicker and more user-friendly—check it out! (#22879)

  • Duplicate TYPE_CHECKING Import: Removed those unnecessary imports for sleeker code. Thanks, @hyongtao-db. (#23013)

Pulling all these improvements together, this release takes a big step forward in polishing everyday experiences and paving the way for future development. Enjoy the upgrade, and as always, reach out with feedback and ideas for what you'd love to see next. Keep coding! 🚀


Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (execute in the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.7.1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.

