1. Start simple and add complexity gradually
The most common mistake when building assistants is trying to do too much at once. Start with a narrow, well-defined use case, then expand.
Progressive approach:
Build a basic assistant that handles one core task well
Add related functionality once the core is working as expected
Expand to handle variations and edge cases
Integrate with additional tools and datasources
This is especially important for assistants that will use multiple integrations or complex workflows.
2. Be specific and explicit
Vague instructions lead to unpredictable results. The more specific your rules, the more consistent your assistant's behavior.
Instead of: "Help users with their questions."
Write this: "When a user asks a question, first determine if it's about billing, technical support, or general information. For billing questions, search the billing database and provide the specific account details. For technical questions, reference the technical documentation datasource. For general questions, provide a concise answer and offer to connect them with a specialist."
Key principle: Define expected inputs and outputs, specify formatting requirements, include examples when helpful, and address edge cases explicitly.
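The triage rule above can be sketched as plain code to make the "specific inputs, specific outputs" idea concrete. This is an illustrative sketch only: the keyword heuristics and the `route_question` function are hypothetical, and a real assistant performs this classification through model reasoning, not string matching.

```python
def route_question(question: str) -> str:
    """Decide which source to consult for a user question.

    A toy version of the billing / technical / general triage rule;
    the keyword lists are placeholders for illustration.
    """
    text = question.lower()
    if any(word in text for word in ("invoice", "charge", "billing", "refund")):
        return "search billing database for account details"
    if any(word in text for word in ("error", "install", "configure", "crash")):
        return "reference technical documentation datasource"
    return "answer concisely and offer to connect a specialist"

print(route_question("Why was I charged twice on my invoice?"))
# → search billing database for account details
```

Notice that every input maps to a defined output: that is exactly the property the rule in the example above gives your assistant.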
3. Document your assistant's capabilities
Include clear information about what your assistant can and cannot do—both in the assistant's description and in its rules.
In the description field: Explain the assistant's purpose, primary use cases, and any prerequisites users should know about.
In the rules: Consider adding a rule that helps the assistant explain its own capabilities when asked. For example: "When a user asks what you can do, explain that you can [list specific capabilities] and that you cannot [list limitations]."
Using Actions
4. Enable only the actions your assistant needs
When configuring integration actions, resist the temptation to enable everything available. Only enable the specific actions your assistant will actually use.
Why this matters:
Improved focus: With fewer actions available, the assistant can better determine when and how to use each one
Better security: Following the principle of least privilege means your assistant only has access to what it truly needs
Easier troubleshooting: Fewer variables to investigate when something goes wrong
Clearer purpose: A focused set of actions makes it obvious what your assistant is designed to do
How to do this: In your assistant's Actions tab, expand each integration and toggle on only the specific actions needed. For example, if your assistant only needs to read Slack messages, don't enable message posting, channel creation, or user management actions.
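The least-privilege check is something you can do by eye in the Actions tab, but it helps to think of it as a subset test. The sketch below is hypothetical: the action names and the `needed_actions`/`enabled_actions` sets are illustrative stand-ins for what you would review in the UI, not an elvex API.

```python
# Hypothetical inventory: what the assistant actually needs vs. what is
# currently toggled on. The Slack-read-only example from above.
needed_actions = {"slack.read_messages"}
enabled_actions = {"slack.read_messages", "slack.post_message"}

# Least privilege means the enabled set should be exactly the needed set.
excess = enabled_actions - needed_actions
if excess:
    print(f"Disable unneeded actions: {sorted(excess)}")
```

Anything in `excess` is an action the assistant can invoke but should not have, which is both a security exposure and a troubleshooting variable.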
5. Build incrementally when using multiple actions
When creating an assistant that uses multiple actions—especially from different integrations—build and test your rules piece by piece rather than trying to configure everything at once.
Why this matters:
Easier debugging: You know exactly what caused any issues that arise
Better understanding: You'll learn how each action behaves before combining them
Reduced complexity: Complex workflows are easier to build when you start simple
Higher success rate: You're far more likely to end up with a working assistant than if you configure everything at once
Recommended approach:
Start with Action A: Write rules, test thoroughly, and refine until it works reliably
Add Action B: Integrate the second action while keeping Action A's functionality intact
Test the interaction: Ensure both actions work together as expected
Continue incrementally: Add additional actions one at a time, testing after each addition
For example, if you're building an assistant that searches for customer data in Salesforce and then sends a summary via Slack, first get the Salesforce search working perfectly. Only after that's solid should you add the Slack messaging capability.
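The incremental approach above can be sketched with stubs. Everything here is a placeholder: `search_salesforce` and `send_slack_summary` stand in for the real integration actions, which you would exercise through the assistant itself rather than in code.

```python
def search_salesforce(query: str) -> list[dict]:
    # Stub for Action A (the real integration call goes through elvex).
    return [{"account": "Acme Corp", "status": "active"}]

def send_slack_summary(records: list[dict]) -> str:
    # Stub for Action B, added only after Action A is reliable.
    return f"Posted summary of {len(records)} record(s) to #sales"

# Step 1: test Action A alone until it works reliably.
records = search_salesforce("account name contains Acme")
assert records, "Salesforce search returned nothing"

# Step 2: only then add Action B and test the combined flow.
print(send_slack_summary(records))
```

The point of the structure is that a failure at step 2 cannot be confused with a failure at step 1, because step 1 was already proven on its own.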
Organizing Information
6. Use rules for behavior, not facts
Rules in elvex are designed to guide how your assistant responds, not to store information it needs to reference.
What belongs in rules:
Tone and style requirements ("Always respond in a professional, friendly tone")
Output formatting instructions ("Format all dates as MM/DD/YYYY")
Behavioral constraints ("Never make assumptions about user intent—ask clarifying questions")
Process guidelines ("Always confirm before taking destructive actions")
What doesn't belong in rules:
Product specifications, company policies, pricing information, or technical documentation
For factual information, use datasources instead. Datasources are more efficient, easier to update, and can be shared across multiple assistants.
7. Separate concerns with context vs. rules
Understanding the difference between rules and context helps you organize your assistant's instructions effectively.
Use rules for:
How the assistant should behave
What it should and shouldn't do
Response formatting and structure
Use context for:
Company-specific terminology
Internal acronyms and jargon
Background information the assistant needs to understand your domain
Product names and basic facts about your organization
This separation makes your assistant easier to maintain and update.
8. Use datasources for knowledge, not instructions
If your assistant needs to reference information to do its job, connect that information as a datasource rather than pasting it into your rules.
Benefits of using datasources:
Efficiency: Datasources don't consume context space in every request
Maintainability: Update information in one place rather than editing rules
Shareability: Multiple assistants can reference the same datasource
Scalability: Datasources can handle large amounts of information that would overwhelm the rules field
When to use datasources: Product catalogs, policy documents, technical specifications, FAQs and knowledge bases, historical data.
Testing and Refinement
9. Test with realistic scenarios
Don't just test your assistant with ideal inputs. Use the actual questions and requests your team will send.
Effective testing approach:
Collect real examples: Gather actual messages or requests from your team
Test edge cases: Try incomplete requests, ambiguous phrasing, and unusual inputs
Refine based on results: Update your rules to handle the scenarios that didn't work well
Iterate continuously: As you discover new edge cases in real use, update your rules
The testing interface in elvex makes this easy—use it frequently during development.
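One practical way to run the collected examples is as a small test table. This is a hedged sketch: `ask_assistant` is a placeholder for however you actually exercise your assistant (for instance, manually in the testing interface), stubbed here so the example is self-contained.

```python
# Placeholder for invoking the assistant; stubbed with trivial logic so
# this sketch runs on its own.
def ask_assistant(message: str) -> str:
    return "billing" if "refund" in message.lower() else "general"

# Real, collected requests paired with the behavior you expect.
real_examples = [
    ("Can I get a refund for last month?", "billing"),
    ("hi, quick question??", "general"),  # edge case: vague phrasing
]

for message, expected in real_examples:
    got = ask_assistant(message)
    assert got == expected, f"{message!r}: expected {expected}, got {got}"
print("All collected examples passed")
```

Each new edge case you discover in real use becomes one more row in the table, so your test coverage grows alongside your rules.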
10. Review and refine regularly
Your assistant's rules aren't set in stone. Plan to review and update them based on real-world usage.
Establish a review cadence:
Weekly for new assistants in active development
Monthly for established assistants with regular use
Quarterly for stable assistants with occasional use
What to review: Conversation logs to identify common issues, user feedback about assistant performance, changes in your processes or tools that might affect the assistant, and new actions or integrations that could improve functionality.
The best assistants evolve based on how people actually use them.
Putting it all together
Building effective assistants in elvex is an iterative process. Start with clear, specific rules focused on a single use case. Enable only the actions you need. Test thoroughly with realistic scenarios. And refine based on real-world usage.
Remember: the goal isn't to build the perfect assistant on the first try. The goal is to build something useful quickly, then improve it over time based on feedback and experience.
By following these recommendations, you'll create assistants that are reliable, maintainable, and genuinely helpful to your team.
