import time
from tableauhyperapi import HyperProcess, Telemetry, Connection

with HyperProcess(telemetry=Telemetry.SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(endpoint=hyper.endpoint) as connection:
        # Expose the Parquet file as an external table.
        connection.execute_command("CREATE TEMPORARY EXTERNAL TABLE tripdata FOR 'd:/yellow_tripdata_2021-06.parquet'")
        t = time.time()
        a = connection.execute_scalar_query("""WITH RECURSIVE
            cnt(x) AS (
                SELECT 1
                UNION ALL
                SELECT x + 1 FROM cnt
                WHERE x < 150000
            )
            SELECT count(*) FROM cnt;""")
        print(time.time() - t, ": ", a)
This runs OK:
D:\>python temp.py
1.0250585079193115 : 150000
If the 150000 is changed to 160000, it raises an error:
D:\>python temp.py
Traceback (most recent call last):
  File "temp.py", line 9, in <module>
    a = connection.execute_scalar_query("""WITH RECURSIVE
  File "D:\Python38\lib\site-packages\tableauhyperapi\connection.py", line 238, in execute_scalar_query
    with self.execute_query(query, text_as_bytes) as result:
  File "D:\Python38\lib\site-packages\tableauhyperapi\connection.py", line 191, in execute_query
    Error.check(hapi.hyper_execute_query(self._cdata,
  File "D:\Python38\lib\site-packages\tableauhyperapi\impl\dllutil.py", line 100, in check
    raise errp.to_exception()
tableauhyperapi.hyperexception.HyperException: A segment overflowed.
Context: 0xfa6b0e2f
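Not part of the original report, but to narrow down where exactly the query starts failing, a binary search between the known-good and known-bad depths could look like the sketch below. It assumes the overflow reproduces deterministically at a fixed threshold, and it opens a fresh connection per attempt in case a failed query leaves the old one in a bad state:

import time
from tableauhyperapi import HyperProcess, Telemetry, Connection, HyperException

QUERY = """WITH RECURSIVE cnt(x) AS (
    SELECT 1
    UNION ALL
    SELECT x + 1 FROM cnt WHERE x < {limit}
) SELECT count(*) FROM cnt;"""

def runs_ok(hyper, limit):
    # True if the recursive query completes without "A segment overflowed".
    try:
        with Connection(endpoint=hyper.endpoint) as connection:
            connection.execute_scalar_query(QUERY.format(limit=limit))
        return True
    except HyperException:
        return False

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    lo, hi = 150_000, 160_000  # known-good / known-bad bounds from the report
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if runs_ok(hyper, mid):
            lo = mid
        else:
            hi = mid
    print("largest working depth:", lo)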
Unfortunately, there is nothing you can do from the outside. This is not an intentional limit; Hyper is simply running out of memory here. This is something we need to fix internally in our source code. Thanks for bringing it up!
I will update this thread as soon as we have shipped a fix.
I just merged a fix to our main branch which should resolve this. The fix should ship in the upcoming November release.
You should now be able to get around 5000x more tuples through the Iteration operator, i.e., around 750M tuples instead of 150k (a ballpark estimate; it might be even more, or slightly less).
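Once the November release is out, one way to sanity-check the new headroom is to rerun the same recursive query with a much deeper bound. A minimal sketch; the 10M depth is an arbitrary value of my choosing, well under the quoted ~750M estimate:

import time
from tableauhyperapi import HyperProcess, Telemetry, Connection

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(endpoint=hyper.endpoint) as connection:
        t = time.time()
        # Same query shape as the original repro, much deeper recursion.
        a = connection.execute_scalar_query("""WITH RECURSIVE cnt(x) AS (
            SELECT 1 UNION ALL SELECT x + 1 FROM cnt WHERE x < 10000000
        ) SELECT count(*) FROM cnt;""")
        print(time.time() - t, ": ", a)  # expect 10000000 on a fixed build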