C4 Architecture
Detailed C4 model diagrams for system architecture
C4 Architecture Model
The C4 model provides a hierarchical way to visualize the SynapseAI B2B Wholesale Platform architecture at different levels of abstraction. These diagrams use FlavorFlux as the example brand.
System Context Diagram
The highest-level view, showing how the system fits into its environment and who interacts with it.
Key Actors
B2B Admin (Emily):
- Creates and manages promotional campaigns
- Analyzes sales and promotion performance
- Configures product bundles and offers
B2B Retail Buyer (Daniel):
- Orders products in bulk
- Manages retail store inventory
- Tracks order history and forecasts
Store Representative (Sarah):
- Manages individual store inventory
- Handles customer interactions
- Places restock orders
External Systems
Composable Core:
- Commercetools: Product catalog, cart, and order management
- Voucherify: Dynamic promotion and loyalty engine
AWS AI Services:
- Amazon Bedrock: Foundation models for campaign generation
- Amazon Forecast: Time-series forecasting for demand prediction
Container Diagram
Shows the high-level technology choices and how containers communicate.
Container Descriptions
Web UI (Vite/React):
- Single-page application for all user interactions
- Real-time chat interface
- Admin dashboard for campaign management
- Built with TanStack Router and Shadcn/ui
API Gateway (FastAPI):
- RESTful API endpoints
- Authentication and authorization
- Request routing and validation
- CORS configuration
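The gateway responsibilities above can be sketched as a minimal FastAPI configuration. This is an illustrative fragment, not the platform's actual code: the allowed origin, route path, and request model are assumptions.

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel

app = FastAPI(title="SynapseAI API Gateway")

# CORS configuration: restrict browsers to the known front-end origin
# (the Vercel URL below is a placeholder assumption).
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://flavorflux.example.vercel.app"],
    allow_credentials=True,
    allow_methods=["GET", "POST"],
    allow_headers=["Authorization", "Content-Type"],
)

class ChatRequest(BaseModel):
    session_id: str
    message: str

@app.post("/api/chat")
async def chat(req: ChatRequest) -> dict:
    # Validation happens automatically via the Pydantic model;
    # routing to the LangGraph layer would go here.
    return {"session_id": req.session_id, "reply": "..."}
```

Pydantic models give the gateway request validation for free: a malformed body is rejected with a 422 before any business logic runs.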
LangGraph Orchestration:
- Stateful conversational AI
- Intent classification and routing
- Multi-turn dialogue management
- Product disambiguation handling
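To make "intent classification and routing" concrete, here is a deliberately simple keyword/regex sketch. The real system presumably classifies with an LLM; the intent names and entity patterns below are illustrative assumptions that only show the shape of the routing input.

```python
import re

# Hypothetical intent vocabulary for the wholesale chat assistant.
INTENT_KEYWORDS = {
    "place_order": ["order", "buy", "restock"],
    "check_inventory": ["stock", "inventory", "on hand"],
    "campaign": ["promotion", "campaign", "discount"],
}

def classify(message: str) -> dict:
    text = message.lower()
    intent = next(
        (name for name, words in INTENT_KEYWORDS.items()
         if any(w in text for w in words)),
        "fallback",
    )
    # Extract simple entities, e.g. quantities like "24 cases of cola".
    m = re.search(r"(\d+)\s+(?:cases?|units?)\s+of\s+([a-z ]+)", text)
    entities = (
        {"quantity": int(m.group(1)), "product": m.group(2).strip()}
        if m else {}
    )
    return {"intent": intent, "entities": entities}

print(classify("Please order 24 cases of cola"))
```

The returned intent plus entities is what the graph's conditional edges branch on; messages that match nothing fall through to a disambiguation/fallback node.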
Backend Services:
- Business logic layer
- Product catalog management
- Campaign orchestration
- Analytics and reporting
MCP Clients:
- Standardized integration layer
- Connection pooling
- Error handling and retries
- Response caching
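The retry and caching behavior listed above can be sketched as a thin client wrapper. The transport callable stands in for a real MCP request; class and parameter names are illustrative assumptions.

```python
import time

class MCPClient:
    """Sketch of the MCP-client wrapper pattern: retries with
    exponential backoff plus in-process response caching."""

    def __init__(self, transport, max_retries=3, base_delay=0.05):
        self.transport = transport      # callable performing the real request
        self.max_retries = max_retries
        self.base_delay = base_delay
        self._cache = {}

    def call(self, tool: str, **params):
        key = (tool, tuple(sorted(params.items())))
        if key in self._cache:          # response caching
            return self._cache[key]
        for attempt in range(self.max_retries):
            try:
                result = self.transport(tool, **params)
                self._cache[key] = result
                return result
            except ConnectionError:
                if attempt == self.max_retries - 1:
                    raise               # retries exhausted
                time.sleep(self.base_delay * 2 ** attempt)  # backoff

# Usage with a transport that fails once, then succeeds:
calls = []
def flaky(tool, **params):
    calls.append(tool)
    if len(calls) < 2:
        raise ConnectionError("transient")
    return {"ok": True}

client = MCPClient(flaky)
print(client.call("get_product", sku="COLA-12"))
```

A repeated `client.call("get_product", sku="COLA-12")` is served from the cache and never touches the transport again.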
Session Store (Redis):
- Conversation history
- User sessions
- Cart state
- Product catalog cache
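The session store's responsibilities can be illustrated with an in-memory stand-in for Redis. The key schema (`session:{id}`) and the one-hour TTL are assumptions; the real store would use redis-py with `EXPIRE`, but the access pattern is the same.

```python
import json
import time

class SessionStore:
    """In-memory sketch of the Redis session store: JSON values under
    namespaced keys, each with a TTL-based expiry."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._data = {}   # key -> (expires_at, serialized payload)

    def _set(self, key, value):
        self._data[key] = (time.time() + self.ttl, json.dumps(value))

    def _get(self, key):
        entry = self._data.get(key)
        if entry is None or entry[0] < time.time():
            self._data.pop(key, None)   # expired, drop it
            return None
        return json.loads(entry[1])

    def append_message(self, session_id, role, text):
        history = self._get(f"session:{session_id}") or []
        history.append({"role": role, "text": text})
        self._set(f"session:{session_id}", history)

    def history(self, session_id):
        return self._get(f"session:{session_id}") or []

store = SessionStore()
store.append_message("abc", "user", "Reorder cola")
print(store.history("abc"))
```

Cart state and the product-catalog cache would follow the same pattern under their own key prefixes (e.g. `cart:{id}`).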
LLM Router:
- Intelligent request routing
- Cost optimization
- Performance balancing
- Fallback mechanisms
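The routing, cost-optimization, and fallback bullets can be sketched together. The heuristics and model names below are illustrative assumptions, not the platform's actual configuration.

```python
# Assumed model tiers: a cheap/fast model and a stronger, pricier one.
CHEAP, STRONG = "gpt-4o-mini", "claude-sonnet"
FALLBACKS = {CHEAP: [STRONG], STRONG: [CHEAP]}

def choose_model(task: str, prompt: str) -> str:
    # Cost optimization: short, classification-style requests go cheap;
    # generation-heavy work goes to the stronger model.
    if task == "intent_classification" or len(prompt) < 200:
        return CHEAP
    return STRONG

def complete(task, prompt, providers):
    """providers: model name -> callable. Tries the chosen model first,
    then walks its fallback chain on provider errors."""
    primary = choose_model(task, prompt)
    for model in [primary] + FALLBACKS[primary]:
        try:
            return model, providers[model](prompt)
        except RuntimeError:   # outage, rate limit, etc.
            continue
    raise RuntimeError("all providers failed")

# Usage: the cheap provider is down, so the call falls back.
def down(prompt):
    raise RuntimeError("unavailable")

model, reply = complete("intent_classification", "restock?",
                        {CHEAP: down, STRONG: lambda p: "ok"})
print(model, reply)
```

In production this logic would sit in front of the LiteLLM client described next, which hides the per-provider API differences behind one completion interface.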
LiteLLM Client:
- Multi-provider LLM interface
- Intent classification
- Response generation
- Text parsing and extraction
Component Diagram - LangGraph Orchestration
Detailed view of the LangGraph conversational AI components.
Component Responsibilities
Intent & Persona Routing:
- Classifies user messages into intents
- Extracts entities (products, quantities, actions)
- Routes to persona-specific workflows
Cart & Order Management:
- Manages shopping cart state
- Handles product resolution and disambiguation
- Processes order placement and confirmation
Planning & Execution:
- Builds execution plans for complex operations
- Simulates pricing and promotions
- Executes orders via Commercetools
Forecasting & Inventory:
- Predicts demand using Amazon Forecast
- Suggests optimal reorder quantities
- Generates stock alerts
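As a sketch of how a demand forecast becomes a reorder suggestion: the platform gets its forecasts from Amazon Forecast, but the downstream arithmetic can be illustrated with a standard reorder-point formula (mean lead-time demand plus a safety-stock term). The service factor `z=1.65` (~95% service level) and the sample numbers are assumptions.

```python
import math

def suggest_reorder(forecast_daily, on_hand, lead_time_days, z=1.65):
    """Return the suggested reorder quantity, or 0 if stock suffices."""
    mean = sum(forecast_daily) / len(forecast_daily)
    var = sum((d - mean) ** 2 for d in forecast_daily) / len(forecast_daily)
    # Safety stock buffers demand variability over the lead time.
    safety_stock = z * math.sqrt(var) * math.sqrt(lead_time_days)
    reorder_point = mean * lead_time_days + safety_stock
    if on_hand <= reorder_point:        # stock alert condition
        return max(0, round(reorder_point - on_hand))
    return 0

# Five days of forecast demand, 30 units on hand, 5-day lead time:
print(suggest_reorder([10, 12, 9, 11, 13], on_hand=30, lead_time_days=5))
```

Crossing the reorder point is also the natural trigger for the stock alerts mentioned above.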
Deployment Diagram
Shows how the system's containers are deployed onto cloud infrastructure.
Deployment Architecture
Vercel (Frontend):
- Global CDN for fast content delivery
- Automatic HTTPS and SSL
- Zero-configuration deployment
AWS ECS Fargate (Backend):
- Serverless container orchestration
- Auto-scaling based on CPU/memory
- No server management required
Application Load Balancer:
- SSL termination
- Path-based routing
- Health checks
ElastiCache Redis:
- Serverless Redis cluster
- Automatic scaling
- Daily backups
Key Architecture Decisions
1. Dual LLM Approach
Decision: Use both LiteLLM and AWS AI services
Rationale:
- Cost optimization (cheaper models for simple tasks)
- Performance balancing (fast inference for common queries)
- Specialized capabilities (AWS Forecast for demand prediction)
Consequences:
- Need intelligent routing logic
- Require fallback mechanisms
- More complex monitoring
2. LangGraph for Orchestration
Decision: Use LangGraph for conversational AI
Rationale:
- Stateful multi-actor applications
- Complex routing and branching
- Built-in memory management
- Python ecosystem integration
Consequences:
- 11+ specialized nodes required
- State management complexity
- Learning curve for developers
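The pattern this decision buys (stateful nodes plus conditional routing) can be shown without the library itself. This is a pure-Python illustration of the node/edge idea, not the actual LangGraph API; the node names are assumptions based on the components described earlier.

```python
# Nodes transform a shared state dict; conditional edges pick the next node.
def classify_intent(state):
    state["intent"] = "order" if "order" in state["message"] else "other"
    return state

def handle_order(state):
    state["reply"] = "Added to cart."
    return state

def fallback(state):
    state["reply"] = "Could you rephrase?"
    return state

NODES = {"classify": classify_intent, "order": handle_order,
         "fallback": fallback}
# Edge functions return the next node name, or None to end the run.
EDGES = {"classify": lambda s: "order" if s["intent"] == "order" else "fallback",
         "order": lambda s: None,
         "fallback": lambda s: None}

def run(state, entry="classify"):
    node = entry
    while node is not None:
        state = NODES[node](state)
        node = EDGES[node](state)
    return state

print(run({"message": "order cola"})["reply"])
```

Scaling this to the 11+ specialized nodes mentioned above is exactly where the state-management complexity and learning curve come from: every node must agree on the shared state schema.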
3. MCP Client Pattern
Decision: Standardize external integrations via MCP
Rationale:
- Consistent error handling
- Connection pooling
- Response caching
- Easy to swap providers
Consequences:
- Abstraction layer overhead
- MCP server deployment required
- Additional network hops
4. Composable Commerce
Decision: Use best-of-breed SaaS services
Rationale:
- Faster time to market
- Lower maintenance burden
- Scalability out of the box
- Focus on core differentiators
Consequences:
- Vendor dependency
- Integration complexity
- Potential cost at scale
5. AWS for Infrastructure
Decision: Deploy on AWS using ECS Fargate
Rationale:
- Serverless container orchestration
- Integrated AI services
- Global availability
- Enterprise-grade security
Consequences:
- Cloud vendor lock-in
- Cost management required
- AWS expertise needed
Next Steps
- Architecture Overview - High-level architecture
- Backend Setup - Set up the backend
- Frontend Setup - Set up the frontend
- Deployment - Deploy to AWS
- User Scenarios - Understand user workflows