A production-ready, real-time chat application architected for scalability and performance.
Syncro Talk is a modern chat platform built with Django Channels and Vue.js, featuring real-time messaging, AI-powered conversation summaries, and a premium "SaaS-like" user interface.
The system is designed as a distributed application composed of containerized services (via Docker Compose). Redis serves as the central message backbone for both WebSocket broadcasting (the Channels layer) and asynchronous task processing (the Celery broker).
```mermaid
graph TD
    Client["Web Client (Vue.js)"]
    subgraph "Application Cluster (Docker)"
        Daphne["Daphne (ASGI Server)"]
        Django[Django Backend]
        Worker[Celery Worker]
        Redis[(Redis Message Broker)]
        DB[(PostgreSQL)]
    end
    External[OpenAI API]

    %% WebSocket flow
    Client <-->|WebSocket/WSS| Daphne
    Daphne <-->|ASGI| Django
    Django <-->|Channel Layer| Redis

    %% API flow
    Client <-->|HTTP REST| Django
    Django <-->|ORM| DB

    %% Async task flow
    Django -.->|Dispatch Task| Redis
    Redis -.->|Consume Task| Worker
    Worker <-->|Generate Summary| External
    Worker -.->|Push Result via WS| Redis
```
- Daphne & Django Channels: Handle the WebSocket connections. Unlike a standard WSGI server, Daphne maintains persistent connections to clients.
- Redis Channel Layer: The "glue" that lets the system scale. It broadcasts messages between Django processes, so if we added 10 more server instances, they would all communicate through this same Redis instance.
- Celery & Async Tasks: Heavy operations (like AI summarization) are offloaded to Celery workers, so the request-handling processes never block and the UI stays snappy.
- PostgreSQL: Robust relational database for persistent storage (Users, Messages, Channels).
- Real-time Messaging: Instant delivery via WebSockets.
- AI Summarization: Integrates with OpenAI to summarize conversation history asynchronously.
- Scalable Architecture: Configured with Redis and Docker to run in distributed environments.
- Type-Safe Core: Backend protected by Mypy static analysis and Ruff linting.
- Robust Testing: Comprehensive Pytest suite covering Views, Models, and WebSocket Consumers.
- Backend: Python 3.11, Django 5.2, Django REST Framework, Django Channels.
- Frontend: Vue.js 3, Vanilla CSS (Glassmorphism design).
- Infra: Docker, Docker Compose, Redis, PostgreSQL.
- Quality: Mypy, Ruff, Pytest.
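As a hedged sketch of how this stack is typically wired together (the service hostname `redis` assumes the Docker Compose network; the keys follow the standard `channels_redis` and Celery settings), the relevant `settings.py` fragment might look like:

```python
# Channel layer: lets every Daphne/Django process broadcast through Redis.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [("redis", 6379)]},
    }
}

# Celery: the same Redis instance doubles as the task broker.
CELERY_BROKER_URL = "redis://redis:6379/0"
CELERY_RESULT_BACKEND = "redis://redis:6379/0"
```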
The project is fully containerized. To run the full stack:

1. Clone the repository:

   ```bash
   git clone https://github.com/thefernandopaes/syncro-talk.git
   cd syncro-talk
   ```

2. Set up the environment:

   ```bash
   cp .env.example .env  # add your OpenAI API key to .env if you want AI features
   ```

3. Run with Docker:

   ```bash
   docker-compose up --build
   ```

Access the application at http://localhost:8000.
If running locally (without Docker):

```bash
# Run tests (autodetects local/SQLite mode)
pytest

# Type check
mypy .

# Lint
ruff check .
```