From the course: Build with AI: Running Local Workflows with n8n
Optimizing performance
Quick recap. As you have seen, we made our workflow more accurate and gave ourselves more control by adding multiple steps to it. Instead of calling the LLM once, we called it several times, once for each value we wanted to extract from the document. This approach gives us better results, but it also causes our workflows to take much longer to complete. And since we are running everything locally, there's no easy way to scale out, that is, to accelerate the workflow by temporarily adding more computers, something very common in cloud scenarios. We can therefore conclude that on local AI setups, there is typically a trade-off between speed and accuracy. Since our computational power is more or less fixed, we usually have to decide whether we want to wait a little longer for a better result or whether we prioritize a quick result, which might be less accurate. That's also the reason why I prefer workflows that can run in…
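The multi-call pattern described above can be sketched as follows. This is only an illustration, not the course's actual n8n workflow: `call_local_llm` is a hypothetical stand-in for a local model call (in n8n, each call would be its own LLM node), and the document text and field names are made up. The point it demonstrates is that one focused call per field improves accuracy, but total runtime grows linearly with the number of fields.

```python
import time

def call_local_llm(prompt: str) -> str:
    # Hypothetical stand-in for a local LLM call; on a real local
    # setup, each call adds seconds of inference time.
    time.sleep(0.01)  # simulate fixed per-call latency
    return f"value for: {prompt[:30]}"

def extract_fields(document: str, fields: list[str]) -> dict[str, str]:
    # One focused prompt per field instead of one big prompt:
    # more accurate, but the workflow takes len(fields) times longer.
    results = {}
    for field in fields:
        prompt = f"Extract the {field} from this document:\n{document}"
        results[field] = call_local_llm(prompt)
    return results

extracted = extract_fields("...invoice text...", ["date", "total", "vendor"])
print(sorted(extracted))
```

Because our local compute is fixed, the only levers here are fewer fields (faster, coarser) or more fields (slower, more precise), which is exactly the trade-off discussed above.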