Available AI Models
Enginy offers a curated selection of AI models from leading providers. Each model has different strengths, and choosing the right one can make a significant difference in output quality and cost. Models marked as Recommended in the platform are the best starting point for most use cases.
Model | Provider | Best For | Key Strengths |
GPT 5.4 | OpenAI | Complex reasoning and professional tasks | Flagship model with instant and thinking modes. Excels at multi-step execution, financial modeling, and detailed analyses. Ideal for power users and enterprise workflows. |
GPT 5 mini | OpenAI | Balanced everyday tasks | Strong balance of capability and speed. Well-suited for enrichment, summarization, and classification. |
GPT 5 nano | OpenAI | High-volume, cost-sensitive tasks | Most lightweight and cost-effective in the GPT-5 family. Best for large-scale list runs where per-row cost matters. |
Gemini 3 Pro | Google | Top-tier reasoning and accuracy | Sparse mixture-of-experts architecture with up to 64K token output. Excellent for complex prompts and high-accuracy tasks. |
Gemini 3 Flash | Google | Speed-first scenarios | Near-instant responses with solid quality. Best for sorting, normalizing data, or tasks where speed matters more than depth. |
Grok 4 | xAI | Complex reasoning and analysis | "Think before responding" approach improves accuracy for chained reasoning and multi-step problem-solving. |
Claude 4.5 Sonnet | Anthropic | Nuanced writing and coding | High-quality, natural-sounding text generation. Follows complex stylistic instructions faithfully. Ideal for outreach copy. |
Note: Models marked Recommended in the Enginy interface (currently GPT 5.4 and Grok 4) are the best general-purpose options for most AI Variable and AI Campaign use cases. Available models may be updated over time — check the model selector in the AI Variable or AI Campaign editor for the latest options.
Quick Selection Guide
Need | Recommended Models |
Top performance for complex reasoning | GPT 5.4, Gemini 3 Pro, Grok 4 |
Best for nuanced writing and outreach copy | Claude 4.5 Sonnet, GPT 5.4 |
Best balance of speed and quality | GPT 5 mini, Gemini 3 Flash |
Most cost-effective for high-volume runs | GPT 5 nano, Gemini 3 Flash |
Prompt Best Practices
Generative AI models only produce great results when they receive clear, well-structured instructions. The quality of the output depends directly on the quality of the prompt.
What Makes a Good Prompt
A good prompt defines a clear objective, provides enough context, specifies the AI's role, describes the desired output format, and includes constraints and quality criteria. Think of it as a briefing for a professional: the better the briefing, the better the work.
Key Principles
1. Objective first, always. Be specific about what you want to achieve.
Vague: "Tell me about customer loyalty."
Clear: "Generate a list of 10 ideas to improve customer loyalty in a B2C contact center, focused on reducing churn by 10% within 6 months."
2. Provide relevant context. Reference variables ({field_name}) to give the AI information about your leads. Include the type of company, type of customer, and any relevant constraints.
3. Define the AI's role. "Act as a senior CX consultant" produces far better results than an unspecified request.
4. Specify the output format. Tell the AI exactly how you want the result: numbered list, table, bullet points, paragraphs of a specific length, etc.
5. Include quality criteria and constraints. State the level of detail, tone, language, and what to avoid.
6. Provide examples. A positive example (what you want) and a negative example (what you do not want) significantly reduce iteration time.
7. Iterate. The first prompt will rarely be perfect. Evaluate the output, adjust the prompt, and refine.
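The principles above come together when variable placeholders like {field_name} are filled in per lead before the prompt is sent to the model. The sketch below shows the general idea; the field names and prompt wording are illustrative examples, not Enginy's actual data model or API:

```python
# Minimal sketch of per-lead variable substitution in a prompt.
# Field names and prompt text are illustrative, not Enginy's real schema.
lead = {
    "first_name": "Maria",
    "company": "Acme Contact Center",
    "agent_count": 80,
}

prompt_template = (
    "Act as a senior CX consultant. "  # role (principle 3)
    "Generate a numbered list of 10 ideas to improve customer loyalty "  # objective + format (1, 4)
    "for {company}, a B2C contact center with {agent_count} agents. "  # context (2)
    "Address the reader, {first_name}, directly. "
    "Tone: professional and actionable."  # quality criteria (5)
)

prompt = prompt_template.format(**lead)
print(prompt)
```

Each row in a list run gets its own filled-in prompt, which is why well-chosen variables matter as much as the static wording.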
Recommended Prompt Template
Section | What to Include |
Persona | Which role the AI represents |
Context | Background information including sender and receiver variables |
Goal | What the AI should produce |
Instructions | Personalization rules, tone, style, formatting constraints, greeting, icebreaker, value proposition, CTA |
Signature (recommended for email) | Instruct the AI not to add a signature — it is added automatically from the identity settings |
Template (optional) | The message structure the AI should follow, with variable placeholders |
Examples (recommended) | Sample outputs showing what a good result looks like |
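Assembled, the template sections above might look like the sketch below. All wording and variable names (such as {sender_name}) are hypothetical placeholders, shown only to illustrate how the sections fit together:

```python
# Illustrative prompt assembled from the recommended template sections.
# Section texts and {variable} names are placeholders, not real settings.
sections = {
    "Persona": "You are a senior B2B sales copywriter.",
    "Context": "Sender: {sender_name} at {sender_company}. "
               "Recipient: {first_name}, {job_title} at {company}.",
    "Goal": "Write a short cold-outreach email (max. 120 words).",
    "Instructions": "Professional tone. Open with a personalized icebreaker, "
                    "state one clear value proposition, end with a single CTA.",
    "Signature": "Do not add a signature; it is appended automatically.",
    "Examples": "Good: concise, specific, one ask. Avoid: generic, salesy copy.",
}

# Join each section under a labeled header so the model sees the structure.
prompt = "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())
print(prompt)
```

Keeping the sections labeled makes the prompt easy to audit and to adjust one piece at a time while iterating.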
Prompt Examples: Weak vs. Improved
Writing:
Weak: "Write an email for a client."
Improved: "Act as a B2B copywriter. Write a short email (max. 150 words) for a client who has been with us for 6 months. Objective: thank them for their trust and propose a brief review meeting. Context: we are a ticketing software company. The recipient is a Head of Operations at a contact center with 80 agents. Tone: professional, direct, value-oriented. Format: suggested subject line + email body."
Analysis:
Weak: "Summarize this text."
Improved: "Act as a business analyst. Summarize the following text in a maximum of 10 clear, actionable bullets focused on recommendations to improve customer experience. Keep only principles and best practices. Professional tone aimed at managers."
Common Prompt Mistakes
Mistake | Why It Hurts |
Being too generic | "Help me with this" gives the AI nothing to work with |
Not specifying the output format | Forces extensive reformatting afterwards |
Forgetting to define the AI's role | Leads to overly generic answers |
Mixing several tasks in one prompt | Results in shallow output across all tasks |
Not reviewing critical outputs | Never use AI responses without human validation in legal, medical, financial, or regulatory contexts |
Sharing sensitive data unnecessarily | Avoid including personally identifiable information when it is not essential |
Prompt Best Practices Checklist
Before saving your prompt, verify:
[ ] Have I clearly defined the objective?
[ ] Have I provided enough context (industry, audience, situation)?
[ ] Have I assigned an appropriate role to the AI?
[ ] Have I described the task precisely?
[ ] Have I specified the output format?
[ ] Have I indicated tone, language, and level of detail?
[ ] Have I added relevant constraints and quality criteria?
[ ] Could I improve the prompt with one or two examples?
[ ] Have I removed any sensitive or personally identifiable data?
[ ] Am I prepared to iterate after seeing the first result?

