# Phoeniqs Chat – Technical Overview
## Overview
Phoeniqs Chat is an enterprise-grade conversational AI platform designed for secure deployment within trusted environments. It supports browser-based, hybrid, and fully on-premises deployments and is purpose-built for internal organizational functions such as HR helpdesks, IT support, developer enablement, and compliance Q&A.
The platform provides secure access to a curated set of open-source large language models, hosted and executed entirely within Phoeniqs-controlled environments. Models are selected based on workload characteristics and can be accessed consistently through the platform's conversational interface or its APIs.
## 💬 Enterprise Conversational Interface (LibreChat-based)
The Phoeniqs AI Platform includes a LibreChat-based conversational interface that enables secure, intuitive interaction with Phoeniqs-managed AI services.
Key capabilities include:
- Chat-based interaction with Phoeniqs-hosted conversational AI agents
- Local session history with user-controlled retention, export, and clearing
- Session-scoped document uploads (up to three PDFs) for Mini-RAG contextual grounding
- In-browser and on-prem UI deployment without reliance on external SaaS frontends
- Consistent interface access across supported LLMs, independent of model choice
- Integration with the Phoeniqs LLM offering (Model-as-a-Service), with model selection by use case
The conversational interface serves as the interaction layer only; all model execution, data access, and security controls are managed by the Phoeniqs AI Platform runtime.
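Since the interface and APIs expose the same Phoeniqs-hosted models, programmatic access can be sketched as a chat-completion-style request. The field names, model identifier, and session mechanism below are assumptions for illustration; the source does not document the actual API schema.

```python
import json

def build_chat_request(prompt, model="phoeniqs-default", session_id=None):
    """Assemble a hypothetical chat-completion-style request body.

    `model` and `session_id` are illustrative names, not documented
    parameters of the Phoeniqs API.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if session_id is not None:
        # Session-scoped context (e.g. the uploaded PDFs used for
        # Mini-RAG grounding) would be referenced by an identifier.
        payload["session_id"] = session_id
    return payload

request_body = build_chat_request("How do I reset my VPN password?")
print(json.dumps(request_body, indent=2))
```

The same payload shape would apply regardless of which supported LLM is selected, matching the model-independent access described above.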
## 🚀 Lightweight, Flexible Deployment
Supports rapid adoption through browser-based access while enabling hybrid and fully on-prem deployments.
## 🔧 Open-Source Model Backbone
Built on open-source LLMs selected for transparency, adaptability, and performance, with support for customization and future in-session model switching.
## 🧪 Trial and Production Access
Provides trial and production tiers, including:
- Serverless endpoints for model execution
- Model Execution & Indexing Units (MEIUs), including Retrieval-Augmented Generation (RAG)
- Predictable consumption models for experimentation and scale
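The Retrieval-Augmented Generation step mentioned above (and the session-scoped Mini-RAG grounding from uploaded PDFs) can be sketched conceptually as: split documents into chunks, rank chunks against the question, and prepend the best match to the prompt. This is only an illustrative sketch using naive keyword overlap, not the platform's implementation.

```python
import re

def tokens(text):
    """Lowercased word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def split_into_chunks(text, size=40):
    """Split text into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def rank_chunks(question, chunks, k=2):
    """Return the k chunks sharing the most word tokens with the question."""
    q = tokens(question)
    return sorted(chunks, key=lambda c: -len(q & tokens(c)))[:k]

def grounded_prompt(question, documents, k=1):
    """Build a prompt grounded in the most relevant document chunks."""
    chunks = [c for doc in documents for c in split_into_chunks(doc)]
    context = "\n".join(rank_chunks(question, chunks, k))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "Password resets are handled by the IT helpdesk portal.",
    "Expense reports must be filed within 30 days.",
]
print(grounded_prompt("How do I reset my password?", docs, k=1))
```

A production system would replace the keyword-overlap ranking with embedding-based vector retrieval, which is what the indexing component of a RAG service typically provides.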
## Accessing the Platform
To access the Phoeniqs AI Platform:
- Create or receive a user account
- Select an access tier: Trial or Production
- Log in to the platform
For questions or support, contact the technical support team.