
DolphinDB Release Notes

DolphinDB Server

Version: 1.00.0 Release date: 2019.12.02

Linux64 binary | Windows64 binary |

Version: 1.00.1 Release date: 2019.12.11

Linux64 binary | Windows64 binary |

Version: 1.00.2 Release date: 2019.12.16

Linux64 binary | Windows64 binary |

Version: 1.00.3 Release date: 2019.12.18

Linux64 binary | Windows64 binary |

Version: 1.00.4 Release date: 2019.12.20

Linux64 binary | Windows64 binary |

Version: 1.00.5 Release date: 2019.12.23

Linux64 binary | Windows64 binary |

Version: 1.00.6 Release date: 2020.01.06

Linux64 binary | Windows64 binary |

Version: 1.00.7 Release date: 2020.01.17

Linux64 binary | Windows64 binary |

Version: 1.00.8 Release date: 2020.01.19

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.9 Release date: 2020.01.30

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.10 Release date: 2020.02.15

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.11 Release date: 2020.02.28

Linux64 binary | Windows64 binary |

Version: 1.00.12 Release date: 2020.03.05

Linux64 binary | Windows64 binary |

Version: 1.00.13 Release date: 2020.03.15

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.14 Release date: 2020.03.24

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.15 Release date: 2020.04.08

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.16 Release date: 2020.04.14

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.17 Release date: 2020.04.24

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.18 Release date: 2020.05.23

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.19 Release date: 2020.06.05

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.20 Release date: 2020.06.15

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.21 Release date: 2020.06.22

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.22 Release date: 2020.07.02

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

Version: 1.00.23 Release date: 2020.07.20

Linux64 binary | Linux64 ABI=1 binary | Windows64 binary |

New features:

  • Support high-availability of streaming based on Raft protocol.

  • New functions:

    • Temporal functions: dayOfYear, dayOfMonth, quarterOfYear, monthOfYear, weekOfYear, hourOfDay, minuteOfHour, secondOfMinute, weekday, yearBegin, yearEnd, businessYearBegin, businessYearEnd, monthBegin, monthEnd, semiMonthBegin, semiMonthEnd, businessMonthBegin, businessMonthEnd, quarterBegin, quarterEnd, quarterBusinessBegin, quarterBusinessEnd, week, lastWeekOfMonth, weekOfMonth, fy5253, fy5253Quarter, isYearStart, isYearEnd, isQuarterStart, isQuarterEnd, isMonthStart, isMonthEnd, isLeapYear, daysInMonth, weekBegin
    • String functions: isUpper, isLower, isTitle, isSpace, isAlpha, isNumeric, isDigit, isAlNum, isDecimal
    • Window functions: ewmMean, ewmStd, ewmVar, ewmCov, ewmCorr
    • Math functions: isMonotonic, isMonotonicIncreasing, isMonotonicDecreasing, quantile, quantileSeries
    • Function nunique calculates the number of unique elements in a vector
    • Function interpolate
  • SQL statements support 3 new types of hint constants: HINT_KEEPORDER, HINT_HASH and HINT_SNAPSHOT. Please refer to function sql in the user manual.

  • Added functions getOS, getOSBit, parseExpr, and dayOfWeek (1.00.1)

  • Can specify the startup script through the system parameter 'startup' (1.00.1)

  • Functions cancelJob and cancelConsoleJob can cancel tasks that cannot be decomposed into subtasks in for loops (1.00.1)

  • Added function mmse (1.00.3)

  • Function replay adds the parameter 'absoluteRate', which supports replaying data at a specified multiple of the speed at which the data was generated (1.00.4)

  • Added function fill! (1.00.5)

  • Added math functions: sinh, cosh, tanh, asinh, acosh, atanh, deg2rad, rad2deg. (1.00.7)

  • Added linear programming function: linprog. (1.00.7)

  • Added function hashBucket to calculate the partition index of the data to be written, which is convenient for parallel writing. (1.00.8)

  • Added function capacity to get the capacity of a vector, i.e. the number of elements it can hold based on the current memory allocation. (1.00.9)

  • Added keyedTable. When newly appended data has the same primary key value as an existing row in the keyed table, it overwrites that row. (1.00.10)

  • Added 3 new parameters for function linprog: lb, ub and method. lb represents the lower bound of the variable; ub represents the upper bound of the variable; method represents the optimization algorithm and currently supports 'simplex' and 'interior-point'. (1.00.11)
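The idea behind hashBucket (1.00.8), mapping each value to a partition index so that writes can be parallelized by bucket, can be sketched in Python. This is an illustrative analogue only; the modulo-of-hash scheme below is an assumption and DolphinDB's built-in hash function may differ:

```python
def hash_bucket(value, bucket_count):
    """Map a value to a partition index in [0, bucket_count).

    Illustrative only: DolphinDB's actual hash may differ from Python's hash().
    """
    return hash(value) % bucket_count

def group_by_bucket(values, bucket_count):
    """Group values by bucket so each bucket can be written by a separate worker."""
    buckets = {}
    for v in values:
        buckets.setdefault(hash_bucket(v, bucket_count), []).append(v)
    return buckets
```

With the rows pre-grouped this way, each writer thread handles one bucket, so no two writers touch the same partition concurrently.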

Improvements:

  • scheduleJob can call functions defined in DolphinDB modules.
  • Functions isMonotonic and isMonotonicIncreasing now return true for weakly increasing vectors; function isMonotonicDecreasing now returns true for weakly decreasing vectors. (1.00.2)
  • Besides vector and matrix, functions nullFill!, bfill!, ffill! and lfill! can accept in-memory tables as an input parameter and support replacing NULL values in all columns in the entire table. (1.00.2)
  • Improved time-series aggregator engine to support handling data that are ordered within each group but not in the entire table (1.00.3).
  • The window alignment scale of the time-series aggregator engine has been extended to support the minute level. (1.00.3)
  • Improved the functions for importing text files: loadText, ploadText, loadTextEx, textChunkDS and extractTextSchema. (1.00.6)
    • Can skip a specified number of rows at the beginning of the input files.
    • Can specify a parsing format for date and time types.
    • Can import only the specified columns.
    • For a column that is specified as numeric type, non-numeric characters are ignored. If there are no numbers, a NULL value is returned (previous versions return 0).
    • Can parse comma separators in integers or floating point numbers.
    • Can specify a conversion function in loadTextEx. The imported data is processed and then appended to the database table.
  • Modified functions sum3 and sum4. When applied to a matrix, sum3 and sum4 calculate the statistics of each row instead of the entire matrix. (1.00.7)
  • Modified functions percentile and mpercentile. Now they use the interpolation method to be consistent with Python pandas instead of the nearest rank method. The interpolation method has 5 options: 'linear', 'lower', 'higher', 'midpoint' and 'nearest'. (1.00.7)
  • Improved performance of concurrent operations (query and append) on shared in-memory tables. (1.00.9)
  • Improved the efficiency of vector and matrix memory usage. (1.00.9)
  • Added checks on the number of rows of a matrix. Now it is not allowed to create a matrix with zero rows. (1.00.9)
  • Function createTimeSeriesAggregator now supports 2 new parameters: 'updateTime' and 'useWindowStartTime'. 'updateTime' can trigger calculations at intervals shorter than those specified by parameter 'step'. 'useWindowStartTime' specifies whether to use the start time or end time of moving windows as the temporal column in the output table. (1.00.10)
  • Improved deserialization for 'delete' statements by removing the requirement that the 'where' clause of a 'delete' statement must be an expression object. (1.00.10)
  • Function getSessionMemoryStat can now output the IP address and port number of the client. (1.00.10)
  • Improved function loadText. When importing a text file with only the header row and the schema is specified, an empty table is returned instead of throwing an exception. (1.00.11)
  • Improved the time-series aggregator for streaming data. If there are NULL values in the temporal column or if there are large gaps between two adjacent timestamps, the performance is not affected. (1.00.11)
  • Improved the data type recognition algorithm of function loadText to avoid misrecognizing numeric columns as STRING or SYMBOL types due to occasional occurrences of symbols representing null values, such as null, N/A, etc. (1.00.13)
  • Improved function isDuplicated so that it can accept a subarray, which is useful for partitioned or in-memory tables. (1.00.13)
  • Function createPartitionedTable can now take a stream table or a mvcc table as a model table. (1.00.13)
  • Improved code deserialization: if deserialized code refers to a shared table that does not exist, the system no longer throws an exception; instead, it obtains the shared table by calling function objByName so that deserialization can continue. (1.00.13)
  • An empty subarray can be obtained by specifying the same value for the starting and the ending position for the subarray in function subarray. For example: subarray(x, 0:0). (1.00.15)
  • In function subarray, the starting or the ending position of the subarray can now be empty. For example: subarray(x, 2:) or subarray(x, :5). (1.00.15)
  • Parameter 'input' of function iterate can contain NULL values. A NULL value is treated as 0 in calculation. (1.00.15)
  • Improved the performance of the function iif. In most cases, performance can be doubled. (1.00.15)
  • Function loadText supports files with carriage return ('\r') as line breaks. (1.00.15)
  • When using an empty string as an IP address, it no longer throws an exception, but returns an empty IP address. (1.00.15)
  • When the functions char, short, int, long, float and double parse strings, if the input string is empty or not a numeric value, a null value of the corresponding data type is returned instead of 0. (1.00.15)
  • When restoring data with function restore, if an error occurs, an exception is now thrown (previously the error was only logged). (1.00.15)
  • Function migrate now supports restoring all databases and tables in the backup folder at once. (1.00.15)
  • If the last character of the database directory parameter of the functions dropDatabase and existsDatabase is a slash or a backslash, it will be automatically removed. (1.00.15)
  • If the input of function rank is an empty vector, it returns an empty vector instead of throwing an exception. (1.00.16)
  • When the parameter 'forceDelete' of function dropPartition is set to be true, partition deletion is allowed even if the number of copies of the specified partition is 0. (1.00.16)
  • An exception is thrown if the parameter 'partitionPaths' of function dropPartition indicates filtering conditions and contains a NULL value. (1.00.16)
  • Added the restriction that functions related to DFS database operations (including addValuePartitions, addRangePartitions, append!, createPartitionedTable, createTable, database, dropDatabase, setColumnComment, setRetentionPolicy, and tableInsert) can only be executed on data nodes. (1.00.16)
  • If the 'if' branch or 'else' branch of an if/else statement contains illegal components, an exception will be thrown. (1.00.16)
  • SQL update and delete statements now support scalar-based logical expressions such as 1 = 1 or 1 = 0. (1.00.18)
  • In the subWorkers table returned by getStreamingStat, which describes the workers of subscriber nodes, each row now represents a subscription topic. (1.00.18)
  • When unsubscribing from a stream table (unsubscribeTable), all messages of the topic in the message queue of the execution thread will be deleted. (1.00.18)
  • If a SQL statement involves multiple partitions of a table, it is forbidden to use functions whose results are sensitive to the order of rows such as mavg, isDuplicated, etc. in the 'where' clause. (1.00.18)
  • In a SQL 'context by' or 'group by' statement, if the calculation of an individual group fails due to the data (such as inverting a singular matrix), the result of that group is set to NULL and the statement is executed successfully. The system no longer throws an exception to interrupt the execution. (1.00.18)
  • When clearing persistent data with function clearTablePersistence, the system no longer prevents other functions (such as getStreamingStat) from accessing the persistence manager. (1.00.18)
  • Improved parameter verification of function dropPartition. If partition paths contain duplicate values, an error is thrown. (1.00.19)
  • Adjusted some parameter names in functions: nunique, isDuplicated, ewmMean, ewmStd, ewmVar, ewmCovar, ewmCorr, knn, multinomialNB, gaussianNB, zTest, tTest and fTest to be consistent with the parameter naming conventions in DolphinDB. (1.00.19)
  • Improved function run by adding an optional parameter 'newSession'. If set to true (the default value is false), the script is executed in a new session, and the variables of the original session are not deleted. (1.00.19)
  • Improved the stability of partitioned tables. In particular, this fixes the problem of inconsistent versions caused by repeatedly deleting tables in a database partition. (1.00.20)
  • The last joining column of aj now supports 3 more data types: uuid, ipaddr and int128. (1.00.20)
  • Can backup and restore dimension tables. (1.00.20)
  • Added checks when aj or wj uses at least one partitioned table. The joining columns except the last one must include all partitioning columns. (1.00.20)
  • When a time-series streaming aggregator receives new data, the system now checks the number of columns in the new data. (1.00.20)
  • Can use table aliases in nested joins. (1.00.20)
  • Can use aliases for dimension tables in joins. (1.00.20)
  • For shared in-memory tables and MVCC tables, directly accessing table fields through <tableName>.<colName> is now forbidden. Use the field name as an index instead, e.g., t["col1"]. (1.00.21)
  • It is forbidden to add new fields through the update statement in a shared partitioned in-memory table. (1.00.21)
  • Enable TCP_KEEPALIVE when creating TCP connections between nodes in the DolphinDB cluster. (1.00.21)
  • The minimum cache size of a stream table is reduced from 100,000 rows to 1000 rows. (1.00.22)
  • The minimum allowed value of the parameter 'throttle' of function subscribeTable is reduced from 1 second to 0.001 second. (1.00.22)
  • Function dictUpdate! can be applied to a dictionary with an ANY vector as the value of the dictionary. (1.00.22)
  • Added parameter verification to function loadTable. When loading a DFS table, it is not allowed to specify the partitions to load. (1.00.22)
  • The SQL UPDATE statement now requires that the object to be updated must be a table. (1.00.23)
  • Temporal type conversion functions now support a tuple as the input. The functions involved include: date, month, year, hour, minute, second, time, datetime, datehour, timestamp, nanotime, nanotimestamp, weekday, dayOfWeek, dayOfYear, dayOfMonth, quarterOfYear, monthOfYear, weekOfYear, hourOfDay, minuteOfHour, secondOfMinute, millisecond, microsecond, nanosecond. (1.00.23)
  • Improved the stability of the distributed database. Specifically, improved the stability of transaction resolution when the chunk versions are inconsistent; reduced the chances that heartbeat transmission is delayed. (1.00.23)
  • The parameter 'groupingCol' of function contextby is allowed to be an empty array. (1.00.23)
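The weakly monotonic semantics adopted by isMonotonic, isMonotonicIncreasing and isMonotonicDecreasing (1.00.2) can be sketched in Python. This is an illustrative analogue of the semantics, not DolphinDB code:

```python
def is_monotonic_increasing(v):
    # Weakly increasing: equal neighbors are allowed.
    return all(a <= b for a, b in zip(v, v[1:]))

def is_monotonic_decreasing(v):
    # Weakly decreasing: equal neighbors are allowed.
    return all(a >= b for a, b in zip(v, v[1:]))
```

Under the weak definition, is_monotonic_increasing([1, 2, 2, 3]) is true, whereas a strict definition would reject the repeated 2.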

Bug fixes:

  • Fixed a bug that causes system crash when updating a table after executing function reorderColumns!.
  • Fixed a bug that causes system crash when SQL update or delete statements on partitioned in-memory tables are used with local variables. (1.00.1)
  • Fixed a memory leaking problem of Raft follower nodes. (1.00.1)
  • Fixed problems from using module when jobs are serialized to disk. (1.00.1)
  • Fixed the problem that the server may take longer than usual to restart due to a checkpoint issue. (1.00.1)
  • Fixed a crashing problem when function createTimeSeriesAggregator specifies multiple keyColumns. (1.00.2)
  • Fixed the empty data problem of using loadTableBySQL to read data from a partitioned table using COMPO partition. (1.00.2)
  • Fixed a bug: when a partitioned database contains multiple tables, the cached data may be corrupted after executing dropPartition and writing data to one of the tables multiple times. (1.00.4)
  • Fixed a potential crashing issue in SQL with data prior to the year of 1970. (1.00.5)
  • Fixed a bug in assignment statements in serialized function views. If the right-hand side of the assignment statement is a constant array and the function view is executed multiple times after serialization, this array may be modified. This may lead to incorrect result or a crash. (1.00.6)
  • Fixed a bug related to function loadTable. When using loadTable to load a disk-based sequential (SEQ) partitioned table, if parameter 'partitions' is a vector with N elements, then the first N partitions are loaded instead of the partitions specified in 'partitions'. (1.00.7)
  • Fixed a scheduled job loading problem. If scheduled jobs use function views, data nodes would fail to restart as these scheduled jobs cannot be loaded. (1.00.7)
  • Fixed a bug of dropDatabase: If the data of a partitioned database only exist in a subset of data nodes in the cluster, empty chunk numbers will be written to the metadata log of the controller node when dropDatabase is executed. This will cause the controller node to fail to restart as it fails to replay the log. (1.00.7)
  • Fixed a potential memory leak problem for string arrays. If some strings in an array or column are longer than 22 bytes, the following operations involving the string array or column may cause a memory leak: (1.00.8)
    • In SQL statements, 'group by' is used on the string column and is implemented by a sorting method.
    • In SQL statements, 'order by' is used on multiple columns and the first column is the string column.
    • In SQL statements, 'pivot by' is used on the string column.
    • Apply functions pivotby, contextby, groupby, segmentby or cutpoints on the string column or array.
  • Added parameter verification to function linprog; previously, illegal parameters could cause a crash. (1.00.10)
  • Fixed a bug of function loadText: when format is specified for nanotimestamp data type, a parsing error will occur. (1.00.11)
  • Fixed the problem of duplicate key values when appending data to a keyed table. (1.00.12)
  • Fixed the problem that when applying function iif on SYMBOL columns in SQL statements, the server will crash. (1.00.12)
  • Fixed the problem that when a dimension table is deleted and then recreated, querying the table before the table is populated with data will throw an exception that the table does not exist. (1.00.12)
  • Fixed a bug: the system throws an exception about inconsistent column lengths when conducting aggregation on a SYMBOL column or a STRING column with a context by clause. (1.00.12)
  • Fixed the bug that parameter 'hash' of function subscribeTable does not work. (1.00.13)
  • Fixed the bug that function std in the time-series aggregation engine returns 0 instead of null when all values are the same. (1.00.13)
  • Fixed a bug where deserializing a partial application could cause the system to crash. (1.00.13)
  • Fixed a bug that when function sum or avg is used in function createTimeSeriesAggregator and all rows in a group contain NULL values, the result should be a NULL value instead of 0. (1.00.14)
  • Fixed a bug in the computation of sum or avg using a hash approach in SQL statements. If all rows in a group contain NULL values, the result should be a NULL value instead of 0. (1.00.14)
  • Fixed a bug in Windows version of DolphinDB server where closing a client subscription would cause other subscribers on the same node to fail to accept new messages. (1.00.14)
  • Fixed parsing errors for strings ending with '\\', e.g., "hello\\". It no longer throws an exception. (1.00.14)
  • Fixed the problem that if a function in a module is used in a scheduled job, the module cannot be used after server restart. (1.00.14)
  • Fixed a bug in linear programming (linprog) that the accumulation of rounding errors in iterations may lead to incorrect results. (1.00.14)
  • Fixed a bug in function isortTop when selecting the top rows after sorting string arrays and non-string arrays sequentially, which may lead to incorrect results. (1.00.14)
  • Fixed a bug where the system would register duplicate module functions when a module file is executed in the console or GUI multiple times. It may lead to system crash or thrown exceptions. (1.00.14)
  • Fixed a bug that function update! used with multiple filtering conditions generates incorrect results. (1.00.15)
  • Fixed a bug that queries throw exceptions after inserting an empty table into an empty dimension table. (1.00.15)
  • Fixed a bug with function iterate. The system may erroneously determine the parameter 'input' contains Null value, which causes parameter validation failure. (1.00.15)
  • Fixed a bug with function array. For a FLOAT or DOUBLE array, if parameter 'defaultValue' of function array is set to between 0 and 0.5, the elements of the array will be erroneously assigned the value of 0. (1.00.15)
  • Fixed a bug that in a context by query statement, using the wildcard (*) together with a custom function that returns multiple results produces an incorrect query result. (1.00.15)
  • Fixed a bug about using order by after context by or group by. If the column to be sorted is already in the order specified by the user (no rearrangement needed) and the generated query result (an in-memory table) is used for further calculation, some fields may produce incorrect results. (1.00.15)
  • Fixed a bug that function convertEncode does not work in Linux version. (1.00.16)
  • Fixed a bug that when the parameter 'msgAsTable' of function subscribeTable is set to false, and only one message in the new batch satisfies the filtering condition, a message that does not satisfy the filtering condition may be sent to the client. (1.00.16)
  • Fixed a bug that execution of aggregate functions with partitioned tables may cause error of duplicate column names. For example, if MapReduce is used in the execution of a group by statement with a partitioned table, the names of intermediate columns are "col"+number, such as "col1", "col2", etc. If a group-by column happens to have the same name as an intermediate column, an error message about duplicate column names is generated. (1.00.16)
  • Fixed a bug that function loadText may parse DOUBLE type as DATE type in rare cases. (1.00.16)
  • Fixed a memory leak bug when deleting all data of a shared in-memory table if at least one column in the table is a big array. (1.00.17)
  • Fixed a bug that may cause crash when performing equal join (ej) on two shared in-memory tables. The system may crash if one thread deletes all the data of two shared in-memory tables and then adds new data, and if another thread performs equal join on them with multiple joining columns that include at least a STRING type column. (1.00.17)
  • Fixed a bug in function createCrossSectionalAggregator when the parameter triggeringPattern is set to "interval". The calculation is triggered not only at prescribed intervals, but also possibly every time data is inserted. (1.00.18)
  • Fixed a bug that may cause system crash if the parameters of partial application in a RPC call do not use the correct format. (1.00.18)
  • Fixed a bug that if a SQL query with multiple OR conditions that contain both partitioning columns and non-partitioning columns in the where clause is applied on a table with value partitioning scheme, the result may contain more rows than expected. (1.00.18)
  • Fixed a bug that function wsum returns 0 when both parameters contain only Null values. Now it returns Null. (1.00.18)
  • Fixed a bug: When both parameters 'csort' and 'limit' are specified in function sql, the generated SQL statement cannot find the columns specified by 'csort'. (1.00.19)
  • Fixed a bug: When the hash algorithm is used to execute aggregate functions in groups in SQL statements, if the result contains Null values, the system does not set a Null value flag. Therefore, if the results are further filtered with function isNull, the system can't detect Null values. (1.00.19)
  • Fixed a bug: If the hash algorithm is used to execute aggregate function wsum in SQL group-by calculations, and if both inputs of function wsum are Null, the result should be Null instead of 0. (1.00.19)
  • Fixed a bug: When there are multiple streaming executors, executing getStreamingStat will cause the system to crash. This is a bug introduced in 1.00.18. (1.00.19)
  • Fixed memory leak caused by allocating more than 2GB to a contiguous memory block. (1.00.20)
  • Fixed a bug: when multiple batch jobs that call mr or imr are running concurrently, if an exception occurs (e.g., a partition is locked by another transaction and cannot be written to), it may cause the system to crash. (1.00.20)
  • Fixed a bug: when the time-series aggregator performs grouping calculations with useSystemTime=true, if there is no data in the windows, calculation results are erroneously generated. (1.00.20)
  • Fixed a bug with built-in concurrent hash table. This bug may cause the system to crash when creating and accessing shared variables concurrently. (1.00.20)
  • Fixed a bug: a DFS database with multiple levels of directories (e.g., dfs://stock/valueDB) cannot be properly backed up and restored. (1.00.21)
  • Fixed a bug: in equal join, if the data type of the joining column is STRING in the left table and SYMBOL in the right table, and the right table has only 1 row, the result is incorrect in that it always returns an empty table. (1.00.21)
  • Fixed a bug: in joining a DFS table and a dimension table, if all the following conditions are met: (1) no records satisfy the joining conditions; (2) wildcard (*) is used in the select clause; (3) DFS table name and the table alias used in joining are different; (4) there is a column with the same name in both tables, then the system will throw an exception that it cannot find the column with the same name in both tables. (1.00.21)
  • Fixed a bug: the results are erroneous when a large size dictionary is serialized asynchronously. (1.00.22)
  • Fixed a bug: after enabling high availability for the controller node, if a transaction involves too many partitions so the RAFT message length exceeds 64K, the metadata will be truncated when the RAFT message is replayed after restarting the system. (1.00.23)
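The corrected NULL semantics for sum and avg over all-NULL groups (1.00.14) can be illustrated with a small Python sketch, using None to stand in for NULL. This is an analogue of the behavior, not DolphinDB code:

```python
def null_aware_sum(values):
    # Skip NULLs; if every value is NULL, the result is NULL (None), not 0.
    present = [v for v in values if v is not None]
    return sum(present) if present else None

def null_aware_avg(values):
    # Same rule for the mean: an all-NULL group yields NULL, not 0.
    present = [v for v in values if v is not None]
    return sum(present) / len(present) if present else None
```

The pre-1.00.14 behavior corresponded to returning 0 for an all-None group, which conflates "no data" with "sums to zero".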

DolphinDB GUI

  • Support synchronizing DolphinDB modules to remote servers.
  • Fixed an issue where saving the password does not take effect.
  • Fixed the problem that module synchronization fails under the Microsoft Windows OS environment. (1.00.10)
  • Added the web performance monitoring interface for single-mode DolphinDB server. (1.00.10)

DolphinDB plugin binary files

  • Plugins including AWS S3, ZLIB, MYSQL, ODBC and HDF5 are packaged under the folder "server/plugins" for this release.
  • Plugin source code
  • ODBC Plugin: The odbc::append method provides an optional parameter 'insertIgnore'. For target databases that support the 'insert ignore' syntax, when parameter 'insertIgnore' is specified, duplicate data on the primary key will be ignored.
  • OPC Plugin: Added support for Chinese tags. Chinese tags must use UTF-8 encoding. (1.00.21)
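The 'insert ignore' behavior enabled by the ODBC plugin's 'insertIgnore' option can be sketched in Python with a dict keyed by primary key. This is purely illustrative; in practice the deduplication is performed by the target database, not by the plugin:

```python
def insert_ignore(table, rows, key):
    """Insert rows into 'table' (a dict keyed by primary key).

    Rows whose primary key already exists are silently skipped,
    mirroring SQL's INSERT IGNORE semantics.
    """
    for row in rows:
        table.setdefault(row[key], row)
    return table
```

On a key conflict the first inserted row wins and the duplicate is dropped without an error, which is exactly what makes the option safe for idempotent re-appends.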

DolphinDB APIs

  • Java

    Optimized streaming reconnection stability.

    Added function hashBucket to calculate the partition index of the data to be written, which is convenient for parallel writing. (1.00.8)

  • C++

    Optimized streaming reconnection stability.

    Added function hashBucket to calculate the partition index of the data to be written, which is convenient for parallel writing. (1.00.8)

  • Go

    Added function hashBucket to calculate the partition index of the data to be written, which is convenient for parallel writing. (1.00.8)

  • C#

    Support new data types UUID and IPADDR.