OOM killer report
Out of memory: Killed process 3007584 (trino-server) total-vm:181122636kB, anon-rss:116305736kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:253108kB oom_score_adj:0
Table definition
CREATE TABLE ice.v.t (
    c_id bigint NOT NULL,
    timestamp timestamp(6) with time zone NOT NULL,
    s_m varchar,
    s_i varchar,
    s_im varchar,
    s_mc integer,
    s_mn integer,
    lc integer,
    cl bigint,
    tc integer,
    ci bigint,
    t_s integer,
    sw_id varchar NOT NULL,
    o_l integer
)
WITH (
    format = 'ORC',
    format_version = 2,
    location = 's3a://trino/v/t-b11111111111111a00c88de3509bb9c',
    orc_bloom_filter_columns = ARRAY['s_m','s_i','s_im'],
    orc_bloom_filter_fpp = 5E-2,
    partitioning = ARRAY['day(timestamp)'],
    sorted_by = ARRAY['timestamp ASC NULLS FIRST']
)
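The query that triggers the failure is not quoted in the report. Purely as an illustration (the time range and the filter value below are invented, not taken from the issue), a scan of this shape is what the day(timestamp) partitioning, the timestamp sort order, and the s_m/s_i/s_im bloom filters are tuned for:

-- Hypothetical query shape; the actual failing query is not shown in the issue.
SELECT c_id, s_m, s_i, s_im, lc, cl
FROM ice.v.t
WHERE "timestamp" >= TIMESTAMP '2024-01-01 00:00:00 UTC'  -- prunes day(timestamp) partitions
  AND "timestamp" <  TIMESTAMP '2024-01-08 00:00:00 UTC'
  AND s_m = 'some-value'                                   -- candidate for ORC bloom filter skipping
ORDER BY "timestamp";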
I reproduced this with detailed per-node monitoring. It seems the RSS leak is related to a peak of availability errors (minio_node_drive_errors_availability) from the underlying S3 (MinIO) storage.
After some hours of running the query (approx. 1761 files, 115 GB total), the trino-server process on the coordinator consumed all available memory and was killed by the OOM killer.
Here is the strange memory consumption while the request was being processed:
[Graph: coordinator memory usage]
[Graph: other nodes' memory usage]
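For what it is worth, process-level memory on every node can also be sampled from SQL while the query runs, assuming a catalog (called jmx below) backed by Trino's built-in JMX connector is configured; the memory-related column names are an assumption based on the JVM's OperatingSystem MXBean and the connector's lowercased-attribute convention, so they may need adjusting:

-- Sketch only: assumes a "jmx" catalog using the JMX connector.
-- committedvirtualmemorysize roughly corresponds to the total-vm figure in the
-- OOM killer report; the physical-memory columns depend on the JDK in use.
SELECT
  node,
  committedvirtualmemorysize,
  freephysicalmemorysize,
  totalphysicalmemorysize
FROM jmx.current."java.lang:type=operatingsystem";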
Questions:
Trino 468: 12 nodes, 1 standalone coordinator; 125 GB RAM on each node (see the topology cross-check query below).
config.props
Coordinator
Workers
JVM on all nodes
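As a quick cross-check of the topology above, the built-in system catalog lists every active node and flags which one is the coordinator:

-- Standard system.runtime schema; no extra catalog configuration is required.
SELECT node_id, http_uri, node_version, coordinator, state
FROM system.runtime.nodes;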