1. The Problem
中启乘数科技 is a professional Greenplum database service provider. In one customer's environment we found that a small table LEFT JOINed to a large partitioned table often did not use the index on the large table. Careful testing narrowed the behavior down to the following:
- A small table LEFT JOINed to a large table skips the index only when the large table is partitioned and the join column is of type varchar.
- A small table LEFT JOINed to an ordinary (non-partitioned) large table uses the index.
- A small table LEFT JOINed to a large partitioned table also uses the index as long as the join key is of type text or int.
2. Reproducing the Problem
Create the test tables:
create table test_big(id varchar(32), t varchar(32))
distributed by (id)
partition by range(t)
(partition p1 start ('0') inclusive end ('5') exclusive,
 partition p2 start ('5') inclusive end ('9999999999999999999') inclusive);

insert into test_big select seq, seq from generate_series(1, 10000000) as seq;

create index idx_test_big_id on test_big(id);

create table test_small(id varchar(32), t varchar(32)) distributed by (id);

insert into test_small select seq*10000, seq*10000 from generate_series(1, 100) as seq;
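The create index on the parent cascades to each partition; the plans below reference the per-partition indexes test_big_1_prt_p1_id_idx and test_big_1_prt_p2_id_idx. To double-check this in your own environment, a minimal sketch using the standard pg_indexes view (the name pattern is specific to this test table):

-- List the index on the parent and on each child partition.
select tablename, indexname
from pg_indexes
where tablename like 'test_big%'
order by tablename;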
Check the execution plan:
postgres=# set Optimizer to off;
SET
postgres=# set enable_nestloop to on;
SET
postgres=# analyze test_big_1_prt_p1;
ANALYZE
postgres=# analyze test_big_1_prt_p2;
ANALYZE
postgres=# explain analyze select a.* from test_small a left join test_big b on a.id=b.id;
                                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 2:1  (slice2; segments: 2)  (cost=4.25..350948.25 rows=100 width=12) (actual time=5428.774..5430.486 rows=100 loops=1)
   ->  Hash Right Join  (cost=4.25..350948.25 rows=50 width=12) (actual time=44.883..5427.152 rows=61 loops=1)
         Hash Cond: ((b.id)::text = (a.id)::text)
         Extra Text: (seg1)   Hash chain length 1.0 avg, 1 max, using 61 of 524288 buckets.
         ->  Redistribute Motion 2:2  (slice1; segments: 2)  (cost=0.00..313443.00 rows=5000000 width=7) (actual time=2.339..4486.409 rows=5000055 loops=1)
               Hash Key: b.id
               ->  Append  (cost=0.00..113443.00 rows=5000000 width=7) (actual time=0.499..2188.017 rows=5000055 loops=1)
                     ->  Seq Scan on test_big_1_prt_p1 b  (cost=0.00..50419.45 rows=2222223 width=7) (actual time=0.498..828.350 rows=2223115 loops=1)
                     ->  Seq Scan on test_big_1_prt_p2 b_1  (cost=0.00..63023.55 rows=2777778 width=7) (actual time=0.234..527.056 rows=2778615 loops=1)
         ->  Hash  (cost=3.00..3.00 rows=50 width=12) (actual time=0.119..0.119 rows=61 loops=1)
               ->  Seq Scan on test_small a  (cost=0.00..3.00 rows=50 width=12) (actual time=0.069..0.079 rows=61 loops=1)
 Planning time: 0.861 ms
   (slice0)    Executor memory: 127K bytes.
   (slice1)    Executor memory: 75K bytes avg x 2 workers, 75K bytes max (seg0).
   (slice2)    Executor memory: 4232K bytes avg x 2 workers, 4232K bytes max (seg0).  Work_mem: 3K bytes max.
 Memory used:  128000kB
 Optimizer: Postgres query optimizer
 Execution time: 5432.691 ms
(18 rows)
As the plan shows, both partitions are read with sequential scans; the index is not used.
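To tell whether the planner never generates an index path for this query shape, or generates one but costs it higher than the sequential scans, a cheap diagnostic (a sketch only; we do not reproduce its output here) is to make the competing paths unattractive with the standard planner GUCs and re-run EXPLAIN:

-- Diagnostic sketch: penalize seq scans and hash joins, then inspect the plan.
-- These are ordinary planner GUCs; remember to reset them afterwards.
set optimizer to off;
set enable_seqscan to off;
set enable_hashjoin to off;
explain select a.* from test_small a left join test_big b on a.id = b.id;
reset enable_seqscan;
reset enable_hashjoin;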
However, a plain inner join does use the index:
postgres=# explain analyze select a.* from test_small a join test_big b on a.id=b.id;
                                                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 2:1  (slice2; segments: 2)  (cost=0.18..66891.00 rows=100 width=12) (actual time=84.616..99.975 rows=100 loops=1)
   ->  Nested Loop  (cost=0.18..66891.00 rows=50 width=12) (actual time=43.385..94.197 rows=61 loops=1)
         ->  Broadcast Motion 2:2  (slice1; segments: 2)  (cost=0.00..6.00 rows=100 width=12) (actual time=1.052..1.165 rows=100 loops=1)
               ->  Seq Scan on test_small a  (cost=0.00..3.00 rows=50 width=12) (actual time=0.055..0.065 rows=61 loops=1)
         ->  Append  (cost=0.18..334.40 rows=1 width=7) (actual time=0.106..0.937 rows=1 loops=100)
               ->  Index Only Scan using test_big_1_prt_p1_id_idx on test_big_1_prt_p1 b  (cost=0.18..167.20 rows=1 width=7) (actual time=0.049..0.587 rows=0 loops=100)
                     Index Cond: (id = (a.id)::text)
                     Heap Fetches: 0
               ->  Index Only Scan using test_big_1_prt_p2_id_idx on test_big_1_prt_p2 b_1  (cost=0.18..167.20 rows=1 width=7) (actual time=0.056..0.345 rows=0 loops=100)
                     Index Cond: (id = (a.id)::text)
                     Heap Fetches: 0
 Planning time: 1.046 ms
   (slice0)    Executor memory: 200K bytes.
   (slice1)    Executor memory: 58K bytes avg x 2 workers, 58K bytes max (seg0).
   (slice2)    Executor memory: 280K bytes avg x 2 workers, 280K bytes max (seg0).
 Memory used:  128000kB
 Optimizer: Postgres query optimizer
 Execution time: 102.337 ms
(18 rows)
And if we left join a single partition directly, the index is used as well:
postgres=# explain analyze select a.* from test_small a left join test_big_1_prt_p1 b on a.id=b.id;
                                                                             QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 2:1  (slice1; segments: 2)  (cost=0.18..16724.00 rows=100 width=12) (actual time=4.751..5.025 rows=100 loops=1)
   ->  Nested Loop Left Join  (cost=0.18..16724.00 rows=50 width=12) (actual time=0.250..2.501 rows=61 loops=1)
         ->  Seq Scan on test_small a  (cost=0.00..3.00 rows=50 width=12) (actual time=0.056..0.074 rows=61 loops=1)
         ->  Index Only Scan using test_big_1_prt_p1_id_idx on test_big_1_prt_p1 b  (cost=0.18..167.20 rows=1 width=7) (actual time=0.002..0.038 rows=0 loops=61)
               Index Cond: (id = (a.id)::text)
               Heap Fetches: 0
 Planning time: 0.740 ms
   (slice0)    Executor memory: 96K bytes.
   (slice1)    Executor memory: 164K bytes avg x 2 workers, 164K bytes max (seg0).
 Memory used:  128000kB
 Optimizer: Postgres query optimizer
 Execution time: 7.444 ms
(12 rows)
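Since each partition behaves correctly when joined on its own, one rewrite that may be worth trying is to spell the partitions out explicitly behind a UNION ALL, which is semantically equivalent here because the parent table holds no rows of its own. This is only a sketch against the test schema; we did not verify whether the Postgres planner keeps the per-partition index paths after flattening the subquery:

-- Hypothetical rewrite sketch; the partition names come from the test table above.
select a.*
from test_small a
left join (
    select id from test_big_1_prt_p1
    union all
    select id from test_big_1_prt_p2
) b on a.id = b.id;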
The problem does not occur if the join key is of type text.
Create the test tables:
create table test2_big(id text, t text)
distributed by (id)
partition by range(t)
(partition p1 start ('0') inclusive end ('5') exclusive,
 partition p2 start ('5') inclusive end ('9999999999999999999') inclusive);

insert into test2_big select seq, seq from generate_series(1, 10000000) as seq;

create index idx_test2_big_id on test2_big(id);

create table test2_small(id text, t text) distributed by (id);

insert into test2_small select seq*10000, seq*10000 from generate_series(1, 100) as seq;

postgres=# analyze test2_big_1_prt_p1;
ANALYZE
postgres=# analyze test2_big_1_prt_p2;
ANALYZE
Check the execution plan:
postgres=# set Optimizer to off;
SET
postgres=# set enable_nestloop to on;
SET
postgres=# explain analyze select a.* from test2_small a left join test2_big b on a.id=b.id;
                                                                                   QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 2:1  (slice1; segments: 2)  (cost=0.18..33445.00 rows=100 width=12) (actual time=2.988..4.136 rows=100 loops=1)
   ->  Nested Loop Left Join  (cost=0.18..33445.00 rows=50 width=12) (actual time=0.113..1.983 rows=61 loops=1)
         ->  Seq Scan on test2_small a  (cost=0.00..3.00 rows=50 width=12) (actual time=0.011..0.030 rows=61 loops=1)
         ->  Append  (cost=0.18..334.40 rows=1 width=7) (actual time=0.002..0.031 rows=1 loops=61)
               ->  Index Only Scan using test2_big_1_prt_p1_id_idx on test2_big_1_prt_p1 b  (cost=0.18..167.20 rows=1 width=7) (actual time=0.000..0.013 rows=0 loops=61)
                     Index Cond: (id = a.id)
                     Heap Fetches: 0
               ->  Index Only Scan using test2_big_1_prt_p2_id_idx on test2_big_1_prt_p2 b_1  (cost=0.18..167.20 rows=1 width=7) (actual time=0.001..0.015 rows=1 loops=61)
                     Index Cond: (id = a.id)
                     Heap Fetches: 0
 Planning time: 0.810 ms
   (slice0)    Executor memory: 128K bytes.
   (slice1)    Executor memory: 268K bytes avg x 2 workers, 268K bytes max (seg0).
 Memory used:  128000kB
 Optimizer: Postgres query optimizer
 Execution time: 6.018 ms
(16 rows)
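With text join keys the planner picks the nested-loop plan with index-only scans on both partitions, and the Index Cond no longer carries the ::text cast. Based only on this observation, a stopgap a site might consider while waiting for the fix is to rebuild the affected table with text join columns, mirroring test2_big above. This is a hedged sketch against the test schema (test_big_text is a hypothetical name; a real table needs its full column list, and the swap needs a maintenance window and testing):

-- Hypothetical stopgap sketch: rebuild with text keys, as in the test2_big case.
create table test_big_text(id text, t text)
distributed by (id)
partition by range(t)
(partition p1 start ('0') inclusive end ('5') exclusive,
 partition p2 start ('5') inclusive end ('9999999999999999999') inclusive);

insert into test_big_text select id::text, t::text from test_big;
create index idx_test_big_text_id on test_big_text(id);
analyze test_big_text;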
3. Resolution
The problem has been pinpointed: it is a bug that requires a code change to fix. If you are interested in the details, email customer@csudata.com.