We have traced a bug in Snowflake (likely on the backend of the JDBC driver) that occurs when sending java.lang.Double values into Snowflake columns defined as VARCHAR. The JSON sent via the JDBC driver to the Snowflake cloud contains the full precision, but the value is not stored correctly in Snowflake.
To clean up and re-prioritize more pressing bugs and feature requests, we are closing all issues older than 6 months as of March 1, 2023. If there are any issues or feature requests that you would like us to address, please create them according to the new templates we have provided. For urgent issues, opening a support case via the Snowflake Community is the fastest way to get a response.
REPRODUCER:
STEP #1 - create varchar table
CREATE TABLE OB_OPNTOTALMFGCOST (
  Period1 VARCHAR(16777216)
);
STEP #2 - insert a Java Double value with maximum precision
Double value = Double.valueOf("0.08072205302271099");
... use JDBC template to insert...
STEP #3 - select from snowflake:
select * from OB_OPNTOTALMFGCOST;
==> 0.08072205302
Precision is lost :(
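As a quick standalone sanity check (no Snowflake connection needed; the class name is only illustrative), Java's own Double-to-String round trip can be verified to preserve the full precision, which supports the observation that the payload leaves the driver intact and the truncation happens server-side:

```java
public class PrecisionCheck {
    public static void main(String[] args) {
        Double value = Double.valueOf("0.08072205302271099");

        // Double.toString() is specified to emit enough digits to uniquely
        // round-trip the same double, so a JSON payload built from it still
        // carries the full precision on the client side.
        String serialized = value.toString();
        System.out.println(serialized);
        if (Double.parseDouble(serialized) != value) {
            throw new AssertionError("client-side round trip lost precision");
        }

        // The truncated value read back from Snowflake parses to a DIFFERENT
        // double, so the precision loss cannot be in the driver's encoding.
        if (Double.parseDouble("0.08072205302") == value) {
            throw new AssertionError("truncated value should differ");
        }
    }
}
```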
Workaround: send the value as a string, though the type autoconversion should handle this correctly on its own...
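A minimal sketch of the workaround, under the assumption that the insert goes through a Spring JdbcTemplate (the actual JDBC call is shown as a comment since it needs a live connection; table and variable names are illustrative): convert the Double to its canonical decimal string on the client and bind that instead of the Double itself:

```java
import java.math.BigDecimal;

public class WorkaroundSketch {
    public static void main(String[] args) {
        Double value = Double.valueOf("0.08072205302271099");

        // BigDecimal.valueOf(double) goes through Double.toString(), so the
        // resulting plain string round-trips to the exact same double.
        String asString = BigDecimal.valueOf(value).toPlainString();
        System.out.println(asString);

        // Bind the string rather than the Double (hypothetical call):
        // jdbcTemplate.update(
        //     "INSERT INTO OB_OPNTOTALMFGCOST (Period1) VALUES (?)", asString);

        if (Double.parseDouble(asString) != value) {
            throw new AssertionError("string conversion lost precision");
        }
    }
}
```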