In our previous piece, "Age of Agents", we shared our insights on AI Agents, what they are, and why they matter. Agents show a lot of promise to unlock efficiency, automate complex processes, and free human experts to focus on high-value tasks. Yet, we see that designing reliable and production-ready agentic workflows is more nuanced than deploying conventional automation.
At Faktion, we’ve successfully turned ambitious agent concepts into measurable results. For example:
- For our client in property management, we implemented a multi-agent system that transcribes and summarises tenants' meetings, cutting the manual effort of preparing reports by 80%.
- For an editorial service provider, agents work together to structure books into formatted files, translate them, generate alt text for images, and turn them into audiobooks, a process that was previously fully manual.
Our experience shows successful AI agents rarely operate alone. Instead, they thrive within structured, modular "agentic workflows": dynamic systems that guide task delegation, coordination, and completion across specialised agents and business processes.
Let’s dive into our comprehensive, actionable Six-Step Framework for Agent Workflow Design, built on proven patterns from real-world enterprise deployments:
- Laying the Groundwork
- Define Agents and Choose Their Types
- Equip Agents with the Right Tools and Data
- Choosing the Right Orchestration Model
- Visualise & Capture Feedback
- Build, Test, Iterate
And to make this tangible, in the last section, we will apply this framework to build an AI agentic workflow for the HR Employee Onboarding process of a large organisation.
The Six-Step Framework for Agent Workflow Design
Step 1: Laying the Groundwork
Begin by clearly defining what your agent workflow needs to accomplish. This foundational stage involves:
- Mapping the end-to-end process you intend to delegate to agents, including the specific tasks that need to be done at each step.
- Identifying workflow triggers, the tools currently in use, and the expected outcomes.
- Cataloguing required data and contextual information.
This clear overview will serve as the blueprint for defining the types of agents, tools, and orchestration patterns you'll need in the next steps.
This blueprint also acts as your vision of the agentic workflow moving forward.
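The groundwork above can be captured as structured data rather than a slide deck, which makes it directly reusable in the later steps. A minimal sketch, assuming a single illustrative onboarding step (all field names and values here are invented for illustration, not part of the framework):

```python
from dataclasses import dataclass

@dataclass
class ProcessStep:
    """One step of the end-to-end process being mapped."""
    name: str
    tasks: list[str]          # specific tasks performed in this step
    tools: list[str]          # tools currently used by the team
    data_sources: list[str]   # required data and contextual information

@dataclass
class WorkflowBlueprint:
    """The Step 1 blueprint: trigger, mapped steps, and expected outcome."""
    trigger: str
    steps: list[ProcessStep]
    expected_outcome: str

blueprint = WorkflowBlueprint(
    trigger="New hire contract signed",
    steps=[
        ProcessStep(
            name="IT provisioning",
            tasks=["create accounts", "assign hardware"],
            tools=["Active Directory"],
            data_sources=["employee record"],
        )
    ],
    expected_outcome="Employee fully productive by day one",
)
```

Keeping the blueprint in a machine-readable form like this makes it easy to check later that every mapped task has been assigned to an agent.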
Step 2: Define Agents and Choose Their Types
After mapping the current process in Step 1, we now transition to designing the future state by defining agents that will take ownership of the key tasks identified. This is where we move from documentation to transformation, determining which tasks can be delegated to AI agents and what type of agent is best suited for each task.
Why Agent Types Matter
Not all agents are created equal. Agent types define the behavioural blueprint of each agent — how it acts, what it’s optimised for, how it interacts with users or systems, and what capabilities it should possess. Without clearly assigning types, you'll end up with vague, monolithic agents that are hard to reason about, extend, or evaluate.
Typical agent types include:
Research Agents (RAG)
These agents excel at retrieving and synthesising information from extensive knowledge bases and documents. Key capabilities include: contextual information retrieval, multi-source synthesis, knowledge base integration, and citation and source tracking. Typical use cases include policy lookup, compliance research, FAQ systems, and knowledge management.
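As a rough sketch of the retrieval-plus-citation step at the heart of a research agent: the scoring below is naive keyword overlap standing in for a real embedding search, and the knowledge base and document ids are invented for illustration. The point of the sketch is that source ids travel with the results, enabling citation tracking.

```python
def retrieve(query: str, knowledge_base: dict[str, str], top_k: int = 2):
    """Rank documents by naive keyword overlap; return (doc_id, text) pairs.

    A production research agent would use vector search instead; keeping
    the doc_id with each hit is what enables citations in the answer.
    """
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), doc_id, text)
        for doc_id, text in knowledge_base.items()
    ]
    scored.sort(reverse=True)
    return [(doc_id, text) for score, doc_id, text in scored[:top_k] if score > 0]

kb = {
    "policy-12": "Remote work policy lets employees work remotely two days per week",
    "policy-07": "Expense claims for travel require manager approval in advance",
}
hits = retrieve("what is the remote work policy", kb)
# hits keeps the source id alongside the text, e.g. ("policy-12", "...")
```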
Action Executing Agents
These agents directly execute specific actions on demand, transforming user intent into immediate system changes. Key capabilities include: direct system integration, transaction processing, validation and confirmation, and rollback capabilities.
Typical use cases include enrolment systems, account provisioning, ticket creation, and automated scheduling.
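The validate-execute-rollback contract these agents need can be sketched in a few lines. This is a hedged illustration: `provision_account` is a hypothetical stand-in for a real system integration, not an API from any particular platform.

```python
class ActionAgent:
    """Executes one action with validation and a compensating rollback."""

    def __init__(self):
        self.provisioned: list[str] = []  # stand-in for a real target system

    def provision_account(self, username: str) -> None:
        self.provisioned.append(username)

    def rollback(self, username: str) -> None:
        self.provisioned.remove(username)

    def execute(self, username: str) -> str:
        # Validation happens before any side effect is taken.
        if not username.isidentifier():
            return f"rejected: invalid username {username!r}"
        self.provision_account(username)
        try:
            # Confirmation step; a real agent would query the target system.
            assert username in self.provisioned
            return f"provisioned {username}"
        except Exception:
            self.rollback(username)  # compensate on failure
            raise

agent = ActionAgent()
print(agent.execute("jdoe"))      # provisioned jdoe
print(agent.execute("bad name"))  # rejected before any side effect
```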
Conversational Agents
These agents focus on maintaining natural, context-aware dialogue with users, where the conversation itself is the primary goal. Key capabilities include: context retention, personality and tone management, empathetic response generation, and topic navigation.
Typical use cases include virtual assistants, employee support chatbots, onboarding companions, and mental health support systems.
Structured Output Agents
These agents transform unstructured inputs into well-defined JSON schemas or structured formats. Key capabilities include: schema validation, data extraction, format standardisation, and error handling.
Typical use cases include form processing, data migration, report generation, and system integration tasks.
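A minimal sketch of the validate-and-reject contract a structured output agent should enforce: the schema and payload below are illustrative, and in practice a schema library such as Pydantic and an LLM call would fill these roles.

```python
import json

# Illustrative schema: required fields and their expected types.
REQUIRED_FIELDS = {"invoice_number": str, "total": float}

def validate(payload: str) -> dict:
    """Parse model output and enforce the schema, raising on any violation."""
    data = json.loads(payload)
    for field_name, field_type in REQUIRED_FIELDS.items():
        if field_name not in data:
            raise ValueError(f"missing field: {field_name}")
        if not isinstance(data[field_name], field_type):
            raise TypeError(f"{field_name} must be {field_type.__name__}")
    return data

record = validate('{"invoice_number": "INV-001", "total": 129.5}')
```

Rejecting malformed output at this boundary is what makes the downstream systems able to trust the agent's results.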
Simple Task Agents
These agents execute single, well-defined tasks with clear inputs and outputs. Key capabilities include: task specialisation, batch processing, predictable performance, and minimal configuration.
Typical use cases include translation, summarisation, sentiment analysis, data validation, and classification tasks.
These types represent the building blocks of your agent ecosystem. The actual set of agents and the precise tasks they will perform will be shaped through ongoing discovery and collaboration with operational teams.
Step 3: Equip Agents with the Right Tools and Data
Agents must have structured, secure access to high-quality data sources and tools. Critical considerations include:
- Implementing controlled interfaces (e.g., via a secure backend) for agent data access.
- Clearly defining data access permissions to maintain security and compliance.
- Ensuring data sources provide rich contextual information for accurate agent responses.
A structured data framework is essential for agents to reason effectively and produce reliable outcomes.
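The controlled-interface idea can be sketched as a thin backend layer that checks permissions before any agent touches a data source. The roles, sources, and policy below are invented for illustration; a real deployment would back this with your identity and access management stack.

```python
class DataGateway:
    """Secure backend: agents never query sources directly, only via this layer."""

    # Which data sources each agent role may read (illustrative policy).
    PERMISSIONS = {
        "research_agent": {"policy_docs", "faq"},
        "action_agent": {"employee_records"},
    }

    def __init__(self, sources: dict[str, dict]):
        self._sources = sources

    def read(self, agent_role: str, source: str, key: str):
        allowed = self.PERMISSIONS.get(agent_role, set())
        if source not in allowed:
            raise PermissionError(f"{agent_role} may not read {source}")
        return self._sources[source].get(key)

gateway = DataGateway({"policy_docs": {"remote-work": "2 days/week"}})
print(gateway.read("research_agent", "policy_docs", "remote-work"))
```

Centralising access like this keeps permissioning auditable in one place instead of scattered across agent prompts.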
Step 4: Choosing the Right Orchestration Model
Now that we’ve identified the key agents and their tools, the next step is to define how these agents will interact with each other, with users, and with domain experts. In other words, we need to design the orchestration architecture.
This involves more than just connecting agents. We must ask:
- How are tasks routed to the right agent?
- Where and when do agents need to involve domain experts for input or validation?
- Who (or what) decides which agent is responsible for a given user query or backend workflow?
- Are certain agents logically grouped together under a shared orchestration layer (e.g., all BPO-related agents managed by a payroll orchestrator)?
- Can we introduce modular orchestration patterns — from simple pipelines to central planners — based on domain maturity and complexity?
Important orchestration considerations include:
- Centralised vs. Decentralised: Whether to route tasks via a central planner or peer-to-peer collaboration.
- Dynamic vs. Fixed Workflows: Whether workflows are fixed pipelines or adapt dynamically based on context and risk level.
- Embedded vs. Standalone Orchestration Logic: Determining where orchestration logic resides within the system.
- Human in the loop: At which stage(s), under what circumstances, and in what capacity domain experts need to be involved.
This architectural thinking enables us to envision the full agent tree: a system where agents are structured hierarchically or modularly, grouped by function or domain, and dynamically activated based on task type, user need, or workflow context.
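A centralised orchestrator with a human-in-the-loop escape hatch can be sketched in a few lines. Everything here is an illustrative assumption: the routing table, the risk score, and the 0.8 review threshold are placeholders for whatever your domain demands.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    kind: str
    risk: float  # 0.0 (routine) to 1.0 (critical), assigned upstream

class Orchestrator:
    """Central planner: routes tasks to agents, escalates high-risk ones."""

    HUMAN_REVIEW_THRESHOLD = 0.8  # illustrative risk cut-off

    def __init__(self):
        self.routes: dict[str, Callable[[Task], str]] = {}

    def register(self, kind: str, handler: Callable[[Task], str]) -> None:
        self.routes[kind] = handler

    def dispatch(self, task: Task) -> str:
        if task.risk >= self.HUMAN_REVIEW_THRESHOLD:
            return f"escalated {task.kind} to domain expert"
        handler = self.routes.get(task.kind)
        if handler is None:
            return f"no agent registered for {task.kind}"
        return handler(task)

orch = Orchestrator()
orch.register("summarise", lambda t: "summary ready")
print(orch.dispatch(Task("summarise", risk=0.1)))  # routed to the agent
print(orch.dispatch(Task("contract", risk=0.95)))  # escalated to a human
```

A decentralised design would replace the single `dispatch` with peer-to-peer handoffs; the human-review threshold is the knob that trades autonomy against oversight.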
Step 5: Visualise & Capture Feedback
Clearly visualise agent interactions using diagrams, mock-ups, and UI wireframes. Visualisation enables you to involve stakeholders, capture their feedback, and make sure they understand:
- How agents interact with users and systems.
- The expected user experience.
- Alignment with intended workflows and operational needs.
Visualisation is key to stakeholder buy-in and effective system development.
Step 6: Build, Test, Iterate
Now that the vision, tools, data, and orchestration models have been defined, it’s time to bring the agent workflows to life, then test and iterate on them until they meet the expected quality metrics, user adoption, and business goals. This is where Evaluation-Driven Development (EDD) shines: it embeds rigorous building, testing, and iteration so that the agentic workflows are not only functional but continuously optimised for performance, reliability, and alignment with user needs and business goals. This stage is all about execution, validation, and refinement.
Build
Building the agent workflows involves transforming the high-level design into a working solution. During this phase, developers and system integrators should focus on translating specifications into code, configuring systems, and integrating the necessary data sources.
The architecture should prioritise flexibility and scalability, allowing the system to evolve with changing business needs. In the building phase, EDD ensures agents and workflows are not just developed to specification but designed explicitly for continuous evaluation.
Key activities in this stage include:
- Developing the Agent Logic: Codifying the workflow steps and defining how agents interact with users and other systems.
- Integrating Data Sources: Ensuring that agents have access to the right information, whether from enterprise systems or external services.
- Configuring Orchestration: Implementing the process logic that determines how agents interact with each other and how workflows are executed.
- Implementing evaluation agents: Specialised agents continuously assess workflow accuracy, speed, and reliability, ensuring immediate detection of issues or drift.
- Establishing observability: Integrating real-time monitoring and analytics to capture agent performance, user interactions, and system health metrics.
- Creating User Interfaces (UI): If applicable, designing the frontend components that interact with users, providing them with an intuitive experience and interfaces for giving explicit feedback.
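The "evaluation agent" idea from EDD can be sketched as a wrapper that scores every workflow output as it is produced and flags drift. The scorer, threshold, and window size below are illustrative assumptions; real deployments would plug in task-specific quality metrics.

```python
import statistics

class EvaluationAgent:
    """Scores workflow outputs; flags drift when recent quality drops."""

    def __init__(self, scorer, threshold: float = 0.7, window: int = 5):
        self.scorer = scorer        # callable: output -> score in [0, 1]
        self.threshold = threshold  # illustrative quality floor
        self.window = window        # how many recent outputs to average
        self.scores: list[float] = []

    def observe(self, output: str) -> bool:
        """Record a score; return True if the recent average signals drift."""
        self.scores.append(self.scorer(output))
        recent = self.scores[-self.window:]
        return statistics.mean(recent) < self.threshold

# Illustrative scorer: penalise empty or very short outputs.
evaluator = EvaluationAgent(scorer=lambda out: min(len(out) / 20, 1.0))
assert not evaluator.observe("A full, well-formed summary of the meeting.")
assert evaluator.observe("")  # empty output drags the average below threshold
```

Wiring such a wrapper around every agent during the build phase is what makes the later testing and iteration stages measurable rather than anecdotal.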
Test
Once the agent workflows are built, thorough testing is essential to verify that the workflows meet business requirements, function properly under various scenarios, and are free from critical bugs. This phase should include multiple layers of testing to identify and correct issues before deployment.
It is crucial to involve your domain experts in this process. Giving them the tools to test workflows ensures the system is refined against real-world needs. Their expertise and feedback are invaluable in validating the workflows and making the necessary refinements.
Key testing activities include:
- Unit Testing: Testing individual components and logic to ensure they function correctly in isolation.
- Integration Testing: Verifying that the system's components interact seamlessly, especially with third-party systems or APIs.
- User Acceptance Testing (UAT): Involving end-users or business stakeholders to validate whether the workflows align with their expectations and needs.
- Automated agent benchmarks: Evaluation agents perform regular checks, flagging deviations from expected performance and identifying emerging risks.
- Performance Testing: Ensuring the agent workflows can handle expected loads and scale as needed.
- Domain-expert validation: Expert reviews of critical workflow outputs provide valuable corrective signals, reinforcing system accuracy and compliance.
- Security and Compliance Testing: Validating that the agent workflows are secure, protecting user data, and complying with relevant regulations.
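Automated agent benchmarks can start as simply as a golden-set check run on every build. The cases and the rule-based `classify` function below are invented stand-ins for the real agent under test; the structure, a fixed set of known inputs with expected outputs gating deployment, is the point.

```python
# A tiny golden set: known inputs with the expected agent classifications.
GOLDEN_SET = [
    ("I want to reset my password", "it_support"),
    ("When is payday this month?", "payroll"),
    ("Book a desk for Monday", "facilities"),
]

def classify(query: str) -> str:
    """Stand-in for the real routing agent: keyword rules instead of an LLM."""
    q = query.lower()
    if "password" in q:
        return "it_support"
    if "payday" in q or "salary" in q:
        return "payroll"
    return "facilities"

def run_benchmark() -> float:
    """Return accuracy over the golden set; gate deployment on this number."""
    correct = sum(classify(q) == expected for q, expected in GOLDEN_SET)
    return correct / len(GOLDEN_SET)

assert run_benchmark() == 1.0  # any regression drops below 1.0 and fails CI
```

Growing the golden set from domain-expert reviews is a natural way to turn their corrective signals into permanent regression tests.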
Iterate
Building and testing are not one-off activities but part of an ongoing process. The iterative approach allows for constant improvement, ensuring that the agent workflows remain effective and aligned with evolving business goals. Feedback from testing, user experiences, and performance metrics should inform ongoing updates and refinements.
Key iterative activities include:
- Gathering Feedback: Collecting insights from users, stakeholders, and system performance to identify areas for improvement.
- Refining the Workflow: Adjusting agent behaviour, optimising performance, and enhancing the user interface based on feedback and new requirements.
- Updating Tools and Data Sources: As new data becomes available or systems evolve, updating the tools and data that power the workflows ensures they stay relevant and effective.
- Continuous Monitoring: Using analytics and real-time monitoring tools to track how agents perform post-deployment, allowing for quick identification of issues and areas for optimisation.
Building, testing, and iterating are cyclical, ensuring the system can evolve, respond to new challenges, and continuously deliver value.
Example - Agentic Employee Onboarding Process
Now is a good opportunity to make this tangible and apply the framework to the HR Employee Onboarding process of a large organisation: a process that involves multiple teams and numerous tasks, and demands consistency, accuracy, and speed.
Applying Step 1 to the Employee Onboarding Process:
Let’s start with establishing the groundwork and defining the blueprint we need for the next steps.
Applying Step 2 to the Employee Onboarding Process:
Taking the key tasks identified in Step 1, we now define the agents and their types.
By clearly defining agent types in Step 2, we create a blueprint that guides the technical implementation, integration requirements, and user experience design in subsequent steps.
Applying Step 3 to the Employee Onboarding Process:
Now let’s list all the tools and data sources that our AI agents need to play their role effectively and build the proper integrations.
- HR Platform Integration: Controlled access to employee records, organisational structure, and role definitions through secure APIs of existing HR platforms (Workday, LMS, ATS).
- Identity Management: Integration with Active Directory and single sign-on systems for seamless access provisioning.
- Learning Systems: Connection to LMS, skill assessment platforms, and certification databases.
- Facilities Management: Integration with space planning, equipment inventory, and security badge systems.
- Compliance Database: Access to regulatory requirements, policy updates, and training mandates.
Applying Step 4 to the Employee Onboarding Process:
Hybrid Centralised-Modular Orchestration Architecture:
- A central HR Orchestrator Agent oversees the onboarding process.
- It routes tasks such as IT provisioning, training enrolment, and feedback collection to specialised agents.
- Human-in-the-loop: HR and managers are notified at specific checkpoints, especially for critical or non-standard situations.
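Under the assumptions above, the onboarding orchestration can be sketched as a central dispatcher with named human checkpoints. The agent names, checkpoint tasks, and messages are all illustrative, not prescriptions.

```python
# Illustrative routing table for the HR Orchestrator Agent.
SPECIALISED_AGENTS = {
    "it_provisioning": lambda hire: f"accounts created for {hire}",
    "training_enrolment": lambda hire: f"{hire} enrolled in onboarding track",
    "feedback_collection": lambda hire: f"day-30 survey scheduled for {hire}",
}

# Tasks that always pause for HR or manager sign-off.
HUMAN_CHECKPOINTS = {"contract_exception", "visa_sponsorship"}

def hr_orchestrator(task: str, new_hire: str) -> str:
    """Route an onboarding task to a specialised agent or a human checkpoint."""
    if task in HUMAN_CHECKPOINTS:
        return f"paused: {task} for {new_hire} awaits HR approval"
    agent = SPECIALISED_AGENTS.get(task)
    if agent is None:
        return f"unknown task {task!r}: escalated to HR"
    return agent(new_hire)

print(hr_orchestrator("it_provisioning", "J. Doe"))   # routed to the IT agent
print(hr_orchestrator("visa_sponsorship", "J. Doe"))  # paused at a checkpoint
```

Unknown tasks falling back to HR rather than to a best-guess agent is the conservative default for critical or non-standard situations.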
Applying Step 5 to the Employee Onboarding Process:
Visualise the workflow clearly through flow diagrams and mock-ups of onboarding dashboards, chatbots, or conversational UIs. These artefacts let us capture stakeholder feedback before building the full solution.
Conclusion
Successfully integrating AI agents into enterprise workflows requires a structured, rigorous methodology. Faktion’s Six-Step Framework, combined with our Evaluation-Driven Development approach, provides a repeatable blueprint to design, deploy, and continuously refine agent workflows that deliver tangible value, compliance, and adaptability at scale.
The key to sustainable success is clear: align stakeholders, empower domain experts, and embed continuous evaluation at every stage.
By embracing this disciplined yet flexible approach, your organisation can confidently navigate AI’s complexities, transform operational efficiency, and unlock the full potential of agentic systems, moving decisively from vision to real-world impact.