N3 Cluster

feat~srv.clusters.n3~1

N3 is a 3-node cluster.

Needs: adsn, story

Motivation

The need for a new Standard Edition architecture arose from challenges encountered in the previous architecture:

  • Peer-to-peer cluster implementation revealed load distribution issues:

    • In a 5-node setup with one application server and 3 database servers, one application node remained idle

    • In a 3-node setup with one application server and 3 database servers, uneven load distribution led to potential node overload

  • The initiative to design "peer nodes" ctool principles was unsuccessful

Server

N3 Cluster implements (see the topology sketch after this list):

  • A 3-node peer-to-peer cluster

  • 3 routers

  • 6 VVMs (fixed configuration, not currently modifiable)

  • Clean Ubuntu nodes as a requirement

    • This decision aims to minimize software conflicts and reduce operational costs

  • Grafana 8.3.4, because of a reverse proxy problem
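
A minimal Go sketch of this fixed topology, for illustration only. The even per-node placement (one router and two VVMs per node) is an assumption derived from the counts above, not something stated in the design:

```go
package main

import "fmt"

// Fixed N3 topology as listed above: 3 peer nodes, 3 routers, 6 VVMs.
const (
	nodeCount   = 3
	routerCount = 3
	vvmCount    = 6
)

// nodePlan describes what a single node runs.
// The even spread (1 router + 2 VVMs per node) is an assumption for
// illustration; the actual scheduler may place tasks differently.
type nodePlan struct {
	Name    string
	Routers int
	VVMs    int
}

func planN3() []nodePlan {
	plans := make([]nodePlan, nodeCount)
	for i := range plans {
		plans[i] = nodePlan{
			Name:    fmt.Sprintf("node-%d", i+1),
			Routers: routerCount / nodeCount, // 1
			VVMs:    vvmCount / nodeCount,    // 2
		}
	}
	return plans
}

func main() {
	for _, p := range planN3() {
		fmt.Printf("%s: %d router(s), %d VVM(s)\n", p.Name, p.Routers, p.VVMs)
	}
}
```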

Nodes

Swarm stacks

  • Grafana always works with the local Prometheus task (the instance that runs on the same node); see the provisioning sketch after this list

    • Each host has its own Grafana configuration

  • The router is one service with three instances

    • A three-service configuration would lead to port conflicts.

  • Networks

    • "voedger.net" 10.10.0.0

    • Scylla: host mode network
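
Because each host keeps its own Grafana configuration and always talks to the node-local Prometheus task, the datasource provisioning can be generated per host. A minimal sketch, assuming Grafana's standard datasource provisioning format and its default provisioning path; the Prometheus address is a placeholder, not a documented value:

```go
package main

import (
	"fmt"
	"os"
)

// Writes a Grafana datasource provisioning file that points the local Grafana
// instance at the Prometheus task running on the same node.
// The output path and the Prometheus address are assumptions for illustration;
// the actual N3 stack files may wire this differently.
func writeLocalPrometheusDatasource(localPrometheusURL, path string) error {
	provisioning := fmt.Sprintf(`apiVersion: 1
datasources:
  - name: Prometheus (local)
    type: prometheus
    access: proxy
    url: %s
    isDefault: true
`, localPrometheusURL)
	return os.WriteFile(path, []byte(provisioning), 0o644)
}

func main() {
	// Hypothetical node-local address; each host would point at its own instance.
	err := writeLocalPrometheusDatasource(
		"http://127.0.0.1:9090",
		"/etc/grafana/provisioning/datasources/prometheus.yaml",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```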

Alternative configurations

Server: N3: Load Balancer

adsn~srv.clusters.n3.load-balancer~1

The system's load balancing layer must be provided by a cloud-managed load balancer solution, such as Amazon Elastic Load Balancer, Google Cloud Load Balancer, or Hetzner Load Balancer.

Covers:

  • feat~srv.clusters.n3~1

Server: N3: Swarm

adsn~srv.clusters.n3.swarm~1

The system uses Docker Swarm for orchestration.

Covers:

  • feat~srv.clusters.n3~1

Needs: impl

Server: N3: Swarm: All Managers

adsn~srv.clusters.n3.swarm.allmgrs~1

All nodes function as managers.
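
A hedged sketch of how the all-managers property could be verified with the Docker Engine Go client (github.com/docker/docker); this is illustration only, not part of ctool, and assumes the v24-era client API names:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

// Lists Swarm nodes and checks that every node has the manager role,
// as required for the N3 "all managers" configuration.
func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	nodes, err := cli.NodeList(context.Background(), types.NodeListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, n := range nodes {
		if n.Spec.Role != swarm.NodeRoleManager {
			fmt.Printf("node %s is not a manager\n", n.Description.Hostname)
			os.Exit(1)
		}
	}
	fmt.Printf("all %d nodes are managers\n", len(nodes))
}
```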

Covers:

  • feat~srv.clusters.n3~1

Needs: impl

clusters.n3.monitoring

  • Monitoring (see the sketch after this list):

    • 3 Prometheus instances

    • 3 Grafana instances

  • Database:

    • DBMS: Scylla

    • Scylla cluster configuration:

      • Physical deployment: One or three datacenters

      • Logical configuration: ??? Always maintains two datacenters (Scylla configuration)

  • Routing implementation:

    • A router task runs on each node

    • The current solution uses the Voedger image with specific CLI options for routing

    • Note: The possibility of using Nginx is being considered, but the implementation has been postponed due to the complexity of development and maintenance
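
Since every node runs its own Prometheus and Grafana, a quick way to see the monitoring layout is to ask each Prometheus instance for the standard "up" metric over its HTTP API (/api/v1/query). A minimal sketch; the three addresses are placeholders on the assumed 10.10.0.0 network, not documented values:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// Queries each of the three per-node Prometheus instances for the "up" metric
// via the standard Prometheus HTTP API. Addresses are placeholders; a real N3
// deployment exposes Prometheus on its own node addresses.
func main() {
	prometheusAddrs := []string{
		"http://10.10.0.1:9090",
		"http://10.10.0.2:9090",
		"http://10.10.0.3:9090",
	}
	for _, addr := range prometheusAddrs {
		resp, err := http.Get(addr + "/api/v1/query?query=" + url.QueryEscape("up"))
		if err != nil {
			fmt.Printf("%s: unreachable: %v\n", addr, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s: %s\n", addr, body)
	}
}
```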

Covers:

  • feat~srv.clusters.n3~1
