
Syncro Talk

A production-ready, real-time chat application architected for scalability and performance.

Syncro Talk is a modern chat platform built with Django Channels and Vue.js, featuring real-time messaging, AI-powered conversation summaries, and a premium "SaaS-like" user interface.

System Architecture

The system runs as a set of containerized services orchestrated with Docker Compose. Redis acts as the central message backbone, serving both as the Channels layer for WebSocket broadcasting and as the broker for asynchronous Celery tasks.

```mermaid
graph TD
    Client["Web Client (Vue.js)"]

    subgraph "Application Cluster (Docker)"
        Daphne["Daphne (ASGI Server)"]
        Django[Django Backend]
        Worker[Celery Worker]
        Redis[(Redis Message Broker)]
        DB[(PostgreSQL)]
    end

    External[OpenAI API]

    %% WebSocket Flow
    Client <-->|WebSocket/WSS| Daphne
    Daphne <-->|ASGI| Django
    Django <-->|Channel Layer| Redis

    %% API Flow
    Client <-->|HTTP REST| Django
    Django <-->|ORM| DB

    %% Async Task Flow
    Django -.->|Dispatch Task| Redis
    Redis -.->|Consume Task| Worker
    Worker <-->|Generate Summary| External
    Worker -.->|Push Result via WS| Redis
```

Key Components

  1. Daphne & Django Channels: Handle the WebSocket connections. Unlike a standard WSGI server, Daphne (an ASGI server) maintains persistent connections to clients.
  2. Redis Channel Layer: The "glue" that allows the system to scale horizontally. It broadcasts messages between different Django processes; if ten more server instances were added, they would all communicate through this Redis instance.
  3. Celery & Async Tasks: Heavy operations (such as AI summarization) are offloaded to Celery workers. This ensures the request-handling processes never block, keeping the UI snappy.
  4. PostgreSQL: Robust relational database for persistent storage (Users, Messages, Channels).

Key Features

  • Real-time Messaging: Instant delivery via WebSockets.
  • AI Summarization: Integrates with OpenAI to summarize conversation history asynchronously.
  • Scalable Architecture: Configured with Redis and Docker to run in distributed environments.
  • Type-Safe Core: Backend protected by Mypy static analysis and Ruff linting.
  • Robust Testing: Comprehensive Pytest suite covering Views, Models, and WebSocket Consumers.

Tech Stack

  • Backend: Python 3.11, Django 5.2, Django REST Framework, Django Channels.
  • Frontend: Vue.js 3, Vanilla CSS (Glassmorphism design).
  • Infra: Docker, Docker Compose, Redis, PostgreSQL.
  • Quality: Mypy, Ruff, Pytest.
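Tying Channels to Redis happens in Django settings. A minimal sketch of that fragment, assuming the Redis service is reachable under the Docker Compose hostname `redis` (the project's actual settings module may differ):

```python
# Hypothetical settings.py fragment; the host name assumes the Docker
# Compose service is called "redis".
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [("redis", 6379)]},
    },
}
```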

Quick Start

The project is fully containerized. To run the full stack:

  1. Clone the repository:

    git clone https://github.com/thefernandopaes/syncro-talk.git
    cd syncro-talk
  2. Set up Environment:

    cp .env.example .env
    # Add your OpenAI API Key in .env if you want AI features
  3. Run with Docker:

    docker-compose up --build

Access the application at http://localhost:8000.


Development Commands

If running locally (without Docker):

```bash
# Run tests (auto-detects local/SQLite mode)
pytest

# Type check
mypy .

# Lint
ruff check .
```
