As soon as new models are released, the Convocore AI team promptly updates
the platform. This means you typically get access to the latest and most
powerful models right away.
Model Capabilities
Understanding the various models, their strengths, and their potential weaknesses allows you to pick the right model for your specific use case, ensuring your agent is equipped to handle its task. These are the main points to consider when choosing:
Function Calling
Models with tool support enable advanced interactions
through function calling, allowing communication with external APIs.
This capability is crucial for tasks requiring specific data formats or
integrations with external systems.
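To make this concrete, here is a minimal sketch of a tool definition and dispatcher in the common JSON-schema style of function calling. The `get_order_status` function and its fields are hypothetical stand-ins for a real external API, and Convocore's own tool configuration may look different.

```python
# Hypothetical tool definition in the widely used JSON-schema style of
# function calling. The model sees this schema and can ask the agent
# runtime to call get_order_status with structured arguments.
order_status_tool = {
    "type": "function",
    "function": {
        "name": "get_order_status",  # stand-in for a real external API wrapper
        "description": "Look up the shipping status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "Order number, e.g. 'A-1042'",
                },
            },
            "required": ["order_id"],
        },
    },
}

def get_order_status(order_id: str) -> dict:
    """Placeholder for a real call to an order-tracking API."""
    return {"order_id": order_id, "status": "shipped"}

def dispatch_tool_call(name: str, arguments: dict) -> dict:
    """Execute the function the model asked for and return its result."""
    handlers = {"get_order_status": get_order_status}
    return handlers[name](**arguments)

# The result would normally be fed back to the model as a tool message.
print(dispatch_tool_call("get_order_status", {"order_id": "A-1042"}))
```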
Groq Acceleration
Groq-powered models leverage cutting-edge hardware for ultra-fast inference.
This significantly reduces latency, making these models ideal for scenarios
where quick response times are critical.
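If response time matters for your use case, a rough timing harness like the sketch below can help you compare models. Here `call_model` is a placeholder for whichever client call you actually use to reach a Groq-hosted (or any other) model.

```python
import time

def measure_latency(call_model, prompt: str, runs: int = 5) -> float:
    """Average wall-clock latency, in seconds, of repeated model calls.

    call_model is a placeholder: any function that sends the prompt to
    your chosen model and returns the completion text.
    """
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        call_model(prompt)
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Stand-in model call for demonstration; swap in a real client call to
# compare, say, LLaMA-3.1-8b-instant against a larger model.
fake_model = lambda prompt: "ok"
print(f"average latency: {measure_latency(fake_model, 'Hello!'):.4f}s")
```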
Extended Context
Certain models offer larger context windows, allowing them to process and understand longer inputs. This is
particularly useful for tasks involving extensive documents or complex,
multi-turn conversations.
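Before sending a long document, it can help to check that it will plausibly fit in the chosen model's context window. The sketch below uses a crude characters-to-tokens heuristic and the approximate context sizes mentioned later in this guide; treat both as rough estimates, not exact limits.

```python
# Approximate context sizes (in tokens) taken from the guidance below;
# exact limits depend on the specific model version.
APPROX_CONTEXT_TOKENS = {
    "gpt-4o": 128_000,
    "claude-3-5-sonnet-20240620": 200_000,
    "gemini-1.5-pro": 2_000_000,
}

def estimate_tokens(text: str) -> int:
    """Crude heuristic (~4 characters per token); use a real tokenizer for accuracy."""
    return len(text) // 4

def fits_in_context(text: str, model: str, reply_budget: int = 2_000) -> bool:
    """True if the input plus a reply budget stays under the model's rough limit."""
    return estimate_tokens(text) + reply_budget <= APPROX_CONTEXT_TOKENS[model]

long_document = "lorem ipsum " * 50_000
print(fits_in_context(long_document, "gpt-4o"))          # likely False
print(fits_in_context(long_document, "gemini-1.5-pro"))  # likely True
```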
Task Specialization
Different models excel in various specialized tasks, such as code
generation, creative writing, or analytical reasoning. Read our prompt
engineering guide for more
information on creating task-specific agents.
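As a small illustration of task specialization, the configuration below pairs a capable model with a narrowly scoped system prompt. The wording, field names, and model choice are examples only, not Convocore defaults.

```python
# Illustrative agent configuration for a single specialized task.
# Field names and values are hypothetical examples.
code_review_agent = {
    "model": "gpt-4o",
    "system_prompt": (
        "You are a code-review assistant. Identify bugs, security issues, "
        "and style problems in the submitted diff. Reply with a short "
        "bulleted list and suggest a concrete fix for each item."
    ),
    "temperature": 0.2,  # lower temperature favors consistent, analytical output
}

print(code_review_agent["system_prompt"])
```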
Available Models
OpenAI Models
Read more about the GPT models here.
- GPT-4o (with tools)
- GPT-4o-mini (with tools)
- GPT-4-32k (with tools)
- GPT-4 (with tools)
- GPT-3.5-turbo-16k
- GPT-3.5-turbo
Anthropic Models
Read more about the Claude models here.
- Claude-3-5-sonnet-20240620 (with tools)
- Claude-3-opus-20240229
- Claude-3-sonnet-20240229
- Claude-3-haiku-20240307
Google Models
Read more about the Google DeepMind models here.
- Gemini-1.5-pro (with tools)
- Gemini-1.5-flash
- Gemini-1.0-pro
Groq-Powered Models
Read more about the models hosted on Groq here.
- LLaMA-3.1-70b-versatile (with tools)
- LLaMA-3.1-8b-instant
- LLaMA3-70b-8192
- LLaMA3-8b-8192
- Gemma2-9b-it
- Gemma-7b-it
- Mixtral-8x7b-32768
Choosing the Right Model
Selecting the appropriate model for your project depends on several factors:
Task Complexity
For intricate tasks, consider the GPT-4o, Claude-3-opus-20240229, Claude-3-5-sonnet-20240620, or Gemini-1.5-pro models.
Response Speed
Groq-powered models, especially LLaMA-3.1-8b-instant and Gemma-7b-it, excel in scenarios requiring rapid responses.
Context Length
Models like GPT-4o (128k), Gemini-1.5-pro (2 million), Claude-3-5-sonnet-20240620 (200k), and LLaMA-3.1-70b-versatile (128k) offer extended context for handling longer inputs.
Tool Integration
Choose models with tool support for advanced function calling capabilities, such as GPT-4o, GPT-4o-mini, GPT-4-32k, GPT-4, Claude-3-5-sonnet-20240620, Gemini-1.5-pro, and LLaMA-3.1-70b-versatile.
Resource Efficiency
Smaller models like GPT-3.5-turbo, GPT-4o-mini, Claude-3-haiku-20240307, Gemini-1.5-flash, and LLaMA-3.1-8b-instant can be more cost-effective and faster for simpler tasks.
Experiment with different models to find the best balance between writing style, capabilities, and efficiency for your agent's use case.
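To tie the factors above together, here is a rough rule-of-thumb picker. The priorities and mappings are illustrative, following the guidance in this section, and should be adjusted to your own cost, speed, and quality trade-offs.

```python
def suggest_model(needs_tools: bool, long_context: bool,
                  latency_sensitive: bool, complex_task: bool) -> str:
    """Illustrative mapping from requirements to a model, following the
    guidance in this section; not an official recommendation."""
    if latency_sensitive:
        return "llama-3.1-8b-instant"        # Groq-hosted, fastest responses
    if long_context:
        return "gemini-1.5-pro"              # largest context window listed above
    if complex_task:
        return "claude-3-5-sonnet-20240620"  # strong reasoning, supports tools
    if needs_tools:
        return "gpt-4o-mini"                 # tool support at lower cost
    return "gpt-3.5-turbo"                   # inexpensive default for simple tasks

print(suggest_model(needs_tools=True, long_context=False,
                    latency_sensitive=False, complex_task=True))
```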