This patch does exactly that: it implements support for JSON in DDL.
For now we rely on the server's check for JSON validity on INSERT. A JSON
validity check during UPDATE is not implemented yet and is postponed for later work.
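As a quick illustration of the behavior described above (a sketch only: the table name, connection parameters, and the use of the mysql-connector-python client are assumptions, not part of this patch):
```
# Hypothetical illustration: JSON accepted in DDL, with the server's JSON
# validity check applied on INSERT. Connection parameters are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", database="test")
cur = conn.cursor()

cur.execute("CREATE TABLE js_demo (id INT, doc JSON) ENGINE=ColumnStore")

# Valid JSON passes the server-side check.
cur.execute("INSERT INTO js_demo VALUES (1, '{\"a\": 1}')")

# Invalid JSON is expected to be rejected by the same check; UPDATE is not
# validated yet, as noted above.
try:
    cur.execute("INSERT INTO js_demo VALUES (2, 'not json')")
except mysql.connector.Error as err:
    print("rejected as expected:", err)

conn.commit()
conn.close()
```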
* fix(CEJ, segfault): MCOL-6198 - segfault during crossengine join
The patch moves the joiners' initialization to a point after all possible
allocations of the smallSideRGs vector, so the pointer to its data no longer
changes. This makes the crash go away.
An appropriate test is added to the bugfixes suite.
* Change to test
* Another dangling pointer
* A change to test
* A change to test
- Add SharedStorageMonitor thread to periodically verify shared storage (a simplified sketch follows after this list):
* Writes a temp file to the shared location and validates MD5 from all nodes.
* Skips nodes with unstable recent heartbeats; retries once; defers decision if any node is unreachable.
* Updates a cluster-wide stateful flag (shared_storage_on) only on conclusive checks.
- New CMAPI endpoints:
* PUT /cmapi/{ver}/cluster/check-shared-storage — orchestrates cross-node checks.
* GET /cmapi/{ver}/node/check-shared-file — validates a given file’s MD5 on a node.
* PUT /cmapi/{ver}/node/stateful-config — fast path to distribute stateful config updates.
- Introduce in-memory stateful config (AppStatefulConfig) with versioned flags (term/seq) and a shared_storage_on flag (see the AppStatefulConfig sketch after this list):
* Broadcast via helpers.broadcast_stateful_config and enhanced broadcast_new_config.
* Config PUT is now validated with Pydantic models; supports stateful-only updates and set_mode requests.
- Failover behavior:
* NodeMonitor keeps failover inactive when shared_storage_on is false or cluster size < 3.
* Rebalancing DBRoots becomes a no-op when shared storage is OFF (safety guard).
- mcl status improvements: per-node 'state' (online/offline), better timeouts and error reporting.
- Routing/wiring: add dispatcher routes for new endpoints; add ClusterModeEnum.
- Tests: cover shared-storage monitor (unreachable nodes, HB-based skipping), node manipulation with shared storage ON/OFF, and server/config flows.
- Dependencies: add pydantic; minor cleanups and logging.
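A minimal sketch of the shared-storage check described above, with the heartbeat-based skipping and the retry left out. The port, query parameter, response format and storage path used here are assumptions, not the actual CMAPI code:
```
# Sketch only: write a probe file to the shared location and ask every node to
# confirm its MD5 through the node/check-shared-file endpoint.
import hashlib
import os
import uuid

import requests

def check_shared_storage(nodes, shared_dir="/var/lib/columnstore/storagemanager",
                         timeout=5):
    """Return True/False on a conclusive check, or None to defer the decision
    (for example when a node is unreachable), as described above."""
    probe = os.path.join(shared_dir, f"probe-{uuid.uuid4()}.tmp")
    payload = uuid.uuid4().bytes
    with open(probe, "wb") as f:
        f.write(payload)
    expected_md5 = hashlib.md5(payload).hexdigest()

    try:
        for node in nodes:
            try:
                resp = requests.get(
                    f"https://{node}:8640/cmapi/0.4.0/node/check-shared-file",
                    params={"path": probe}, verify=False, timeout=timeout)
                resp.raise_for_status()
            except requests.RequestException:
                return None   # a node is unreachable -> defer the decision
            if resp.json().get("md5") != expected_md5:
                return False  # this node does not see the same file content
        return True
    finally:
        os.remove(probe)
```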
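And a rough sketch of the versioned stateful config together with the failover guard. Only term, seq and shared_storage_on come from the description above; the defaults, method names and the helper function are illustrative:
```
# Sketch of an in-memory, versioned stateful config and the failover guard.
from pydantic import BaseModel

class AppStatefulConfig(BaseModel):
    term: int = 0                  # version component bumped on leadership change
    seq: int = 0                   # version component bumped on each update
    shared_storage_on: bool = False

    def is_newer_than(self, other: "AppStatefulConfig") -> bool:
        # Accept a broadcast update only if its (term, seq) moves forward.
        return (self.term, self.seq) > (other.term, other.seq)

def failover_allowed(config: AppStatefulConfig, active_node_count: int) -> bool:
    # NodeMonitor keeps failover inactive when shared storage is off or the
    # cluster has fewer than 3 nodes.
    return config.shared_storage_on and active_node_count >= 3
```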
Due to this wonderful code:
DBUG_ASSERT(!comment || !comment[0] || comment[strlen(comment)-1] != '.');
DBUG_ASSERT(!comment || !comment[0] || comment[strlen(comment)-1] != ' ');
So I had to fix the comments so that they do not end with a period or a space.
* feat(cmapi): add read_only param for API add node endpoint
* style(cmapi): fixes for string length and quotes
Add the dbroots of other nodes to the read-only node
On every node change, adjust the dbroots in the read-only nodes
Fix logging (trace level) in tests
Remove ExeMgr from constants
Fix tests
Manually remove the read-only node from ReadOnlyNodes on node removal (because nodes are otherwise only deactivated)
Review fixes (mostly switching to a StrEnum analog for Python versions before 3.11, plus changes to the ruff config)
Read-only nodes are now consistently called read replicas
Don't write hostname into IP fields of the config like PMSx/IPAddr, pmx_WriteEngineServer/IPAddr
We calculate ReadReplicas by finding PMs without a WriteEngineServer section (see the sketch below)
In _replace_localhost, replace local IP addresses with the resolved IP addresses and local hostnames with the resolved hostnames.
ModuleHostName/ModuleIPAddr is kept intact.
Keep only IPv4 in ActiveNodes/DesiredNodes/InactiveNodes
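A minimal sketch of the read-replica detection rule, assuming the pmN_WriteEngineServer/IPAddr layout of Columnstore.xml mentioned above; how the list of PM ids is obtained is out of scope here:
```
# Sketch: a PM counts as a read replica when the config has no
# pmN_WriteEngineServer/IPAddr entry for it.
import xml.etree.ElementTree as ET

def read_replica_pms(config_path, pm_ids):
    root = ET.parse(config_path).getroot()
    return [pm_id for pm_id in pm_ids
            if root.find(f"pm{pm_id}_WriteEngineServer/IPAddr") is None]
```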
feat: add mock DNS resolution builder for testing hostname/IP mappings
* Fix _add_node_to_PMS: if the node is already in PMS, keep it in the existing items so it is not lost when the list is rebuilt
* Make tests independent from CWD
Fixes for _add_Module_entries
Fixed node removal and tests
Fixes for node manipulation tests
This patch changes the logic from counting all nodes to counting only
read-write nodes when messaging about DML operations.
feat(MCOL-6082): Multiple readers of dbroots using OamCache logic
This patch introduces centralized logic for selecting which dbroots are
accessible in PrimProc on which node. The logic lives in OamCache for the
time being and can be moved later.
Fix build
* feat(optimizer): MCOL-5250 rewrite queries with DISTINCT
... as aggregated queries.
So the query
```
SELECT DISTINCT <cols list>
FROM <from list>
WHERE <where clause>
HAVING <having clause>
ORDER BY <orderby list>
LIMIT <limit>
```
will become
```
SELECT *
FROM
(
SELECT <cols list>
FROM <from list>
WHERE <where clause>
HAVING <having clause>
) a
GROUP BY 1,2,3,...,N
ORDER BY <orderby list>
LIMIT <limit>
```
* move ORDER BY to the outer query
* fix test
* reuse cloneWORecursiveSelects() in clone()
* fix subselect columns processing