
Error with enable new analyzer: "Not found column __table1.name in block" #63395

Closed
oleg-savko opened this issue May 6, 2024 · 7 comments
Labels
st-wontfix Known issue, no plans to fix it currently

Comments

@oleg-savko

With the new analyzer enabled (allow_experimental_analyzer = 1), queries stop working with the exception DB::Exception: Not found column __table1.name

{5e42fa07-019e-42a5-909d-cd6362576834} <Error> executeQuery: Code: 10. DB::Exception: Not found column __table1.name in block. There are only columns: name, database, __table2.engine, engine: While executing Remote. (NOT_FOUND_COLUMN_IN_BLOCK) (version 24.4.1.2088 (official build)) (from ..) (in query: /* {"app": "dbt", "dbt_version": "1.7.13", "profile_name": "dbt_global", "target_name": "prod", "connection_name": "list__mm_automation"} */
 select t.name as name, t.database as schema, multiIf( engine in ('MaterializedView', 'View'), 'view', engine = 'Dictionary', 'dictionary', 'table' ) as type, db.engine as db_engine,t.engine like 'Replicated%' or t.engine = 'View' as is_on_cluster from clusterAllReplicas("default", system.tables) as t join system.databases as db on t.database = db.name where schema = 'mm_automation' group by name, schema, type, db_engine, t.engine ), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c9a449b
1. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000780b9ac
2. DB::Exception::Exception<String const&, String>(int, FormatStringHelperImpl<std::type_identity<String const&>::type, std::type_identity<String>::type>, String const&, String&&) @ 0x00000000080b544b
3. DB::Block::getByName(String const&, bool) const @ 0x000000000fbfd42e
4. DB::adaptBlockStructure(DB::Block const&, DB::Block const&) @ 0x000000000fec530d
5. DB::RemoteQueryExecutor::processPacket(DB::Packet) @ 0x000000000fec356a
6. DB::RemoteQueryExecutor::readAsync() @ 0x000000000fec4e56
7. DB::RemoteSource::tryGenerate() @ 0x000000001266264f
8. DB::ISource::work() @ 0x000000001230a2a2
9. DB::ExecutionThreadContext::executeTask() @ 0x00000000123257a8
10. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000012319a90
11. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001231b1b8
12. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false, true>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false, true>, void*>) @ 0x000000000ca5bab9
13. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false, true>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false, true>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000ca5f82a
14. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000ca5e62d
15. ? @ 0x00007fdee20e4609
16. ? @ 0x00007fdee1fff353
@oleg-savko oleg-savko added the potential bug To be reviewed by developers and confirmed/rejected. label May 6, 2024
@Algunenano
Member

Please provide a reproducer, version used, etc. This works fine:

SELECT
    t.name AS name,
    t.database AS schema,
    multiIf(engine IN ('MaterializedView', 'View'), 'view', engine = 'Dictionary', 'dictionary', 'table') AS type,
    db.engine AS db_engine,
    (t.engine LIKE 'Replicated%') OR (t.engine = 'View') AS is_on_cluster
FROM clusterAllReplicas(test_cluster_two_shards, system.tables) AS t
INNER JOIN system.databases AS db ON t.database = db.name
WHERE schema = 'mm_automation'
GROUP BY
    name,
    schema,
    type,
    db_engine,
    t.engine

@Algunenano Algunenano added st-need-info We need extra data to continue (waiting for response) close in a month if not active This will be closed in case of no information labels May 6, 2024
@oleg-savko
Author

Please provide a reproducer, version used, etc. This works fine:

SELECT
    t.name AS name,
    t.database AS schema,
    multiIf(engine IN ('MaterializedView', 'View'), 'view', engine = 'Dictionary', 'dictionary', 'table') AS type,
    db.engine AS db_engine,
    (t.engine LIKE 'Replicated%') OR (t.engine = 'View') AS is_on_cluster
FROM clusterAllReplicas(test_cluster_two_shards, system.tables) AS t
INNER JOIN system.databases AS db ON t.database = db.name
WHERE schema = 'mm_automation'
GROUP BY
    name,
    schema,
    type,
    db_engine,
    t.engine

Strange behaviour:

  • If the setting allow_experimental_analyzer is explicitly set to 1 in users.xml or via a SET query, it works:

select * from system.settings where name = 'allow_experimental_analyzer';

name,value,changed
allow_experimental_analyzer,1,1

  • But if it is left at the default config (not explicitly changed), it throws the error:
select * from system.settings where name = 'allow_experimental_analyzer';

name,value,changed
allow_experimental_analyzer,1,0

@Algunenano
Member

Are you using a cluster where the replicas have different ClickHouse versions and different settings for allow_experimental_analyzer? If the setting is not explicitly set, it must be the same on all replicas.

@oleg-savko
Author

Same ClickHouse version, and yes, it does not work in this case:
(screenshot attached)

@oleg-savko
Author

With the same default config on both servers it works.

The problem seems to occur only when the setting is not the same on all servers.
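To check for this mismatch, the setting can be compared across every replica in a single query. A sketch, assuming the cluster name "default" from the original report (adjust to your cluster):

```sql
-- Compare allow_experimental_analyzer on all replicas at once.
-- 'default' is the cluster name taken from the original query.
SELECT
    hostName() AS host,
    value,
    changed
FROM clusterAllReplicas('default', system.settings)
WHERE name = 'allow_experimental_analyzer';
```

If the `value` or `changed` columns differ between hosts, the initiator and remote replicas will plan the distributed query differently, which matches the failure described here.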

@Algunenano
Member

You must set up the same config on both servers or explicitly enable the setting (via config or query) on the initiator.
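A minimal sketch of enabling the setting explicitly on the initiator, either for the session or per query (the `SELECT 1` is just a placeholder for the real query):

```sql
-- Enable the analyzer for the current session on the initiator,
-- so it agrees with the replicas:
SET allow_experimental_analyzer = 1;

-- Or attach it to a single query with a SETTINGS clause:
SELECT 1 SETTINGS allow_experimental_analyzer = 1;
```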

@Algunenano Algunenano closed this as not planned Won't fix, can't repro, duplicate, stale May 6, 2024
@Algunenano Algunenano added st-wontfix Known issue, no plans to fix it currently and removed st-need-info We need extra data to continue (waiting for response) close in a month if not active This will be closed in case of no information labels May 6, 2024
@oleg-savko
Author

You must set up the same config on both servers or explicitly enable the setting (via config or query) on the initiator.

That's fine, but a friendlier error message would help in this case. It is not obvious at all, and it is easy to forget to change and sync the setting on all servers the first time.

@novikd novikd removed the potential bug To be reviewed by developers and confirmed/rejected. label May 6, 2024
3 participants