
Releases: milvus-io/milvus

milvus-2.1.0


v2.1.0

Release date: 2022-07-27

Compatibility

Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version
2.1.0 | 2.1.0 | 2.1.0-beta4 | 2.1.0 | 2.1.2

Milvus 2.1.0 not only introduces many new features, including support for the VARCHAR data type, in-memory replicas, embedded Milvus, Kafka support, and a RESTful API, but also greatly improves the functionality, performance, and stability of Milvus.

Features

  • Support for VARCHAR data type

Milvus now supports variable-length strings as a scalar data type. Like the previous scalar types, VARCHAR can be specified as an output field or used for attribute filtering. A MARISA-trie-based inverted index is also supported to accelerate prefix queries and exact matches.
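
Below is a minimal sketch in Python (pymilvus 2.1; the collection and field names such as varchar_demo, title, and vec are illustrative) showing a schema with a VARCHAR field used both as an output field and in a prefix filter:

```python
from pymilvus import (
    connections, FieldSchema, CollectionSchema, DataType, Collection,
)

connections.connect(host="localhost", port="19530")

# Schema with an INT64 primary key, a VARCHAR scalar field, and a small float vector.
fields = [
    FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=False),
    FieldSchema(name="title", dtype=DataType.VARCHAR, max_length=256),
    FieldSchema(name="vec", dtype=DataType.FLOAT_VECTOR, dim=8),
]
collection = Collection("varchar_demo", CollectionSchema(fields))

collection.load()  # assumes data has already been inserted
results = collection.search(
    data=[[0.1] * 8],
    anns_field="vec",
    param={"metric_type": "L2", "params": {"nprobe": 10}},
    limit=5,
    expr='title like "milvus%"',  # prefix match on the VARCHAR field
    output_fields=["title"],      # VARCHAR as an output field
)
```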

  • In-memory replicas

In-memory replicas enable you to load data on multiple query nodes. Like read replicas in traditional databases, in-memory replicas can help increase throughput if you have a relatively small dataset but want to scale read throughput with more hardware resources. We will support hedged reads in future releases to increase availability when in-memory replicas are used.
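
As a hedged sketch (assuming a Milvus 2.1 cluster with at least two query nodes and pymilvus 2.1, where load() accepts replica_number; the collection name is illustrative), loading a collection into multiple in-memory replicas is a single call:

```python
from pymilvus import Collection

collection = Collection("varchar_demo")  # an existing, indexed collection
collection.release()                     # the replica number can only be set at load time
collection.load(replica_number=2)        # keep two full copies across query nodes
```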

  • Embedded Milvus

Embedded Milvus enables you to install Milvus with a single pip command and run quick demos and short Python scripts on your MacBook, including those with the M1 processor.

  • Kafka support (Beta)

Apache Kafka is one of the most widely used open-source distributed message stores. In Milvus 2.1.0, you can use Kafka for message storage simply by modifying the configuration.

  • RESTful API (Beta)

Milvus 2.1.0 now provides a RESTful API for applications written in PHP or Ruby. Gin, one of the most popular Golang web frameworks, is adopted as the web server.
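
A hedged sketch of calling the HTTP interface from Python; the port (9091) and path (/api/v1/health) are assumptions about a default deployment and may differ in your configuration, so this only illustrates talking to Milvus over plain HTTP:

```python
import requests

# Assumption: the proxy's HTTP server is enabled, listens on port 9091, and exposes
# a health endpoint at /api/v1/health. Adjust host, port, and path to your deployment.
resp = requests.get("http://localhost:9091/api/v1/health")
print(resp.status_code, resp.text)
```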

Performance

The Milvus core team conducted full performance benchmarking and profiling and fixed several bottlenecks on the load and search paths. Under some test cases, Milvus search performance is boosted by about 3.2x thanks to the search combination logic.

  • #16014 Enables ZSTD compression for Pulsar.
  • #16514 #17273 Improves load performance.
  • #17005 Loads binlog for different fields in parallel.
  • #17022 Adds logic for search merging and a simple task scheduler for read tasks.
  • #17194 Simplifies the merge logic of searchTask.
  • #17287 Reduces default seal proportion.

Stability

To improve stability, especially during streaming data insertion, we fixed a few critical issues including:

  • Fixed out-of-memory issues.
  • Fixed full message queue backlog caused by message queue subscription leakage.
  • Fixed the issue that deleted entities could still be read.
  • Fixed data being erroneously cleaned by compaction during load or index building.

Other improvements

  • Security

Starting from Milvus 2.1.0, we support username/password authentication and TLS connections. We also enable secure connections to dependencies such as S3, Kafka, and etcd.
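
A hedged connection sketch with pymilvus 2.1, assuming authentication and TLS have been enabled on the server side via the security settings in milvus.yaml; root/Milvus is the default bootstrap credential and should be changed in production:

```python
from pymilvus import connections

# The credential and TLS flag below are assumptions about a deployment with auth enabled.
connections.connect(
    alias="default",
    host="localhost",
    port="19530",
    user="root",
    password="Milvus",
    secure=True,  # use a TLS-encrypted gRPC channel
)
```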

  • ANTLR parser

Milvus now adopts Go ANTLR as the plan parser, making it more flexible to add new grammar such as arithmetic operations on numerical fields. The adoption of ANTLR also prepares for Milvus query language support in future releases.
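
For illustration, the kind of filter the new parser is meant to handle, with an arithmetic operation inside the expression (the field name price is hypothetical, and a loaded collection is assumed):

```python
# Filter on an arithmetic expression over a numeric scalar field.
results = collection.query(
    expr="price % 10 == 0 and price < 1000",
    output_fields=["price"],
)
```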

  • Observability

We refined monitoring metrics by adding important metrics including search QPS and latency to the new dashboard. Please notify us if any metrics critical to your production environment are not listed.

  • Deployment

For users who don't have a K8s environment but still want to deploy a cluster, Milvus now supports Ansible deployment. See Install Milvus Cluster for more information.

Known issues

  1. Partitions are not a fully released feature, so we recommend users not rely on them. #17648 When a partition is dropped, its data and index cannot be cleaned up.
  2. When building an index after load, the collection needs to be released and reloaded; see the workaround sketch below. #17809 When an index is created on a loaded collection, segments that are already loaded are not notified to load the index.
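
A hedged workaround sketch for known issue 2: build the index while the collection is released, then load it again so every segment serves the index (the field name and index parameters are illustrative):

```python
collection.release()
collection.create_index(
    field_name="vec",
    index_params={"index_type": "IVF_FLAT", "metric_type": "L2", "params": {"nlist": 128}},
)
collection.load()
```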

milvus-2.0.2


v2.0.2

Release date: 2022-04-02

Compatibility

Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version
2.0.2 | 2.0.2 | 2.0.4 | 2.0.0 | 2.0.2

Milvus 2.0.2 is a minor bug-fix version of Milvus 2.0. We fixed multiple critical issues that caused collection load failures and server crashes. We've also greatly boosted query-by-ID performance by utilizing the primary key index. Prometheus metrics are redesigned in this version, and we highly recommend deploying the monitoring system in production environments.
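
For example, a query by primary key, which is the pattern accelerated by the primary key index in this release (a hedged sketch assuming a loaded collection with an INT64 primary key field named pk):

```python
results = collection.query(
    expr="pk in [100, 101, 102]",  # lookup by primary key values
    output_fields=["pk"],
)
```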

Bug fixes

  • #16338 Data coord uses VChannel when unsubscribing to data node.
  • #16178 #15725 Query node crashes.
  • #16035 #16063 #16066 Collection load error.
  • #15932 Compaction runtime error.
  • #15823 DescribeCollection RPC fails in data node failover.
  • #15783 Recall drops after compaction.
  • #15790 Shallow copy of typeutil.AppendFieldData.
  • #15728 Query coord sets wrong watchDmchannelInfo when one partition is empty.
  • #15712 DEPLOY_MODE is read or used before being set.
  • #15702 Data coord panics if message queue service quits before it.
  • #15707 Compaction generates empty segment.


milvus-2.0.1


v2.0.1

Release date: 2022-02-23

Compatibility

Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version
2.0.1 | 2.0.1 | 2.0.4 | 2.0.0 | 2.0.1

Milvus 2.0.1 is a minor bug-fix version of Milvus 2.0. The key progress in Milvus 2.0.1 includes, first, that Knowhere, the execution engine of Milvus, was separated from the Milvus repository and moved to a new one, milvus-io/knowhere, and, second, that support was added for compiling Milvus across multiple platforms. We fixed a few critical issues that caused query node crashes, index building failures, and server hangs. The default Golang dependency is upgraded to solve memory usage issues. We also upgraded the default Pulsar dependency to solve the log4j security issue.

Improvements

  • #15491 Supports compiling and running Milvus on Mac.
  • #15453 Adds log when removing keys in garbage collector.
  • #15551 Avoids copying while converting C bytes to Go bytes.
  • #15377 Adds collectionID to the return of SearchResults and QueryResults.

Features

  • #14418 Implements automatic item expiration on compaction.
  • #15542 Implements mixed compaction logic.

Bug fixes

  • #15702 Data coord panics if message queue service quits before it closes.
  • #15663 Query node crashes on concurrent search.
  • #15580 Data node panics when compacting empty segment.
  • #15626 Failed to create index when segment size is set to larger than 2GB.
  • #15497 SessionWatcher quits if no re-watch logic is provided when meeting ErrCompacted.
  • #15530 Segments under Flushing status are not treated as Flushed segments.
  • #15436 Watch DML channel failed because of no collection meta, causing load collection failure.
  • #15455 SegmentIDs is not respected when querynode.GetSegmentInfo is called.
  • #15482 EntriesNum of delta logs is not recorded correctly in segment meta.

Dependency Upgrade

  • #11393 Upgrades Golang from 1.15.2 to 1.16.9.
  • #15603 Upgrades Knowhere to 1.0.1.
  • #15580 Upgrades Pulsar from 2.7.3 to 2.8.2.

milvus-2.0.0


v2.0.0

Release date: 2022-01-25

Compatibility

Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version
2.0.0 | 2.0.0 | 2.0.2 | 2.0.0 | 2.0.0

We are excited to announce the general release of Milvus 2.0, which is now considered production-ready. Without changing the existing functionality released in the PreGA release, we fixed several critical bugs reported by users. We sincerely encourage all users to upgrade to the Milvus 2.0 release for better stability and performance.

Improvements

  • Changes the default consistency level to Bounded Staleness:
    If the Strong consistency level is adopted during a search, Milvus waits until data is synchronized before searching, which takes longer even on a small dataset. Under the default consistency level of Bounded Staleness, newly inserted data remains invisible for a couple of seconds before it can be retrieved; see the sketch after this list. For more information, see Guarantee Timestamp in Search Requests.

  • #15223 Makes query nodes send search or query results by RPC.
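
As a hedged sketch of the consistency trade-off described above (assuming a pymilvus 2.0 release where search() accepts a consistency_level override; field names are illustrative), a single request can opt back into Strong consistency when it must see the freshest data:

```python
results = collection.search(
    data=[[0.1] * 8],
    anns_field="vec",
    param={"metric_type": "L2", "params": {"nprobe": 10}},
    limit=5,
    consistency_level="Strong",  # wait for data sync instead of the Bounded Staleness default
)
```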

Bug fixes

  • Writing blocked by a message storage quota exceeded exception:

    • #15221 Unsubscribes channel when closing Pulsar consumer.
    • #15230 Unsubscribes channel after query node is down.
    • #15284 Adds retry logic when pulsar consumer unsubscribes channel.
    • #15353 Unsubscribes topic in data coord.
  • Resource leakage:

    • #15303 Cleans flow graph if failed to watchChannel.
    • #15237 Calls for releasing memory in case that error occurs.
    • #15013 Closes payload writer when error occurs.
    • #14630 Checks leakage of index CGO object.
    • #14543 Fixes that the Pulsar reader is not closed.
    • #15068 Fixes that a file is not closed when ReadAll returns an error in the local chunk manager.
    • #15305 Fixes a memory leak caused by query node search exceptions.
  • High memory usage:

    • #15196 Releases memory to OS after index is built.
    • #15180 Refactors flush manager injection to reduce goroutine number.
    • #15100 Fixes storage memory leak caused by runtime.SetFinalizer.
  • Cluster hang:

    • #15181 Stops handoff if the segment has been compacted.
    • #15189 Retains nodeInfo when query coord panic at loadBalanceTask.
    • #15250 Fixes collectResultLoop hang after search timeout.
    • #15102 Adds flow graph manager and event manager.
    • #15161 Fixes panic when query node recovery fails.
    • #15347 Makes index node panic when failed to save meta to MetaKV.
    • #15343 Fixes Pulsar client bug.
    • #15370 Releases the collection first when dropping a collection.
  • Incorrect returned data:

    • #15177 Removes global sealed segments in historical.
    • #14758 Fixes that deleted data returned when handoff is done for the segment.

Known issues

  • #14077 Core dump happens under certain workloads and is still being reproduced.
    Solution: The system recovers automatically.
  • #15283 Cluster fails to recover because of Pulsar's failure to create a consumer (Pulsar #13920).
    Solution: Restart the Pulsar cluster.
  • The default Pulsar dependency uses an old log4j2 version that contains a security vulnerability.
    Solution: Upgrade the Pulsar dependency to 2.8.2. We will soon release a minor version to upgrade Pulsar to newer releases.
  • #15371 Data coord may fail to clean up channel subscriptions if balancing and a node crash happen at the same time.
    Solution: Remove the channel subscription with Pulsar admin.

milvus-2.0.0-PreGA

Pre-release

v2.0.0-PreGA

Release date: 2021-12-31

Compatibility

Milvus version | Python SDK version | Java SDK version | Go SDK version | Node.js SDK version
2.0.0-PreGA | 2.0.0rc9 | 2.0.0 | Coming soon | 1.0.20

Milvus 2.0.0-PreGA is the preview release of Milvus 2.0.0-GA. It now supports entity deletion by primary key and data compaction to purge deleted data. We also introduce a load-balancing mechanism into Milvus to distribute memory usage evenly across query nodes. Some critical issues are fixed in this release, including the cleanup of dropped collection data, incorrect Jaccard distance calculation, and several bugs that cause system hangs and memory leakage.

It should be noted that Milvus 2.0.0-PreGA is NOT compatible with previous versions of Milvus 2.0.0 because of some changes made to data codec format and RocksMQ data format.

Features

  • Deleting entities: Milvus now supports deleting entities by primary key; see the sketch after this list. Because Milvus relies on append-only storage, it only supports logical deletion; that is, Milvus inserts a deletion mark on the entities so that no search or query will return the marked entities. Therefore, note that overusing deletion may cause search performance to plummet and storage usage to surge. See Delete entities for more instructions.

  • Compaction: The compaction mechanism purges deleted or expired entities in binlogs to save storage space. It is a background task that is triggered by data coord and executed by data node.

  • Automatic load balance/handoff #9481: The load balance mechanism distributes segments evenly across query nodes to balance the memory usage of the cluster. It can be triggered either automatically or by users. The handoff mechanism means that, when a growing segment is sealed, the query node waits until the index node has built an index for the segment and then loads the sealed segment into memory for search or query.
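
A minimal deletion sketch with pymilvus (assuming a collection whose INT64 primary key field is named pk); as noted above, deletion is logical, so the deleted IDs simply stop appearing in search and query results:

```python
# Delete entities by primary key; only "in" expressions on the primary key are supported.
collection.delete(expr="pk in [1, 2, 3]")
```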

Improvements

  • #12199 Parallelizes executions between segments to improve the search performance.
  • #11373 Allows batch consumption of messages in RocksMQ internal loop to improve the system efficiency.
  • #11665 Postpones the execution of handoff until index creation is completed.

Bug fixes

  • Data is not cleared on etcd, Pulsar, and MinIO when a collection is dropped:
    • #12191 Clears the metadata of the dropped segment on etcd.
    • #11554 Adds garbage collector for data coord.
    • #11552 Completes procedure of dropping collection in data node.
    • #12227 Removes all index when dropping collection.
    • #11436 Changes the default retentionSizeInMB to 8192 (8GB).
  • #11901 Wrong distance calculation caused by properties of different metric types.
  • #12511 Wrong similarity correlation caused by properties of different metric types.
  • #12225 RocksMQ produce hangs when searching repeatedly.
  • #12255 RocksMQ server does not close when standalone exits.
  • #12281 Error when dropping alias.
  • #11769 serviceableTime is updated incorrectly.
  • #11325 Panic when reducing search results.
  • #11248 Parameter guarantee_timestamp is not working.

Other Enhancements

  • #12351 Changes proxy default RPC transfer limitation.
  • #12055 Reduces memory cost when loading from MinIO.
  • #12248 Supports more deployment metrics.
  • #11247 Adds getNodeInfoByID and getSegmentInfoByNode function for cluster.
  • #11181 Refactors segment allocate policy on query coord.

milvus-2.0.0-rc8

Pre-release

v2.0.0-RC8

Release date: 2021-11-5

Compatibility

Milvus version | Python SDK version | Java SDK version | Go SDK version | Node SDK version
2.0.0-RC8 | 2.0.0rc8 | Coming soon | Coming soon | 1.0.18

Milvus 2.0.0-RC8 is the last release candidate before Milvus 2.0.0-GA. It supports handoff tasks, primary key deduplication, and search with Time Travel. The mean time to recovery (MTTR) has also been greatly reduced with enhancements to the timetick mechanism. We ran stress tests on 2.0.0-RC8 with 10M-entity datasets, and both standalone and distributed clusters survived for 84 hours.
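
A hedged sketch of a Time Travel search with pymilvus (field names and data are illustrative, and assume a collection with an INT64 primary key and an 8-dimensional vector field): the timestamp comes from an earlier mutation result, and passing it asks Milvus to search the data as it existed at that point:

```python
mutation = collection.insert([[1, 2], [[0.1] * 8, [0.2] * 8]])  # pk column + vector column
ts = mutation.timestamp  # hybrid timestamp assigned to this insert

results = collection.search(
    data=[[0.1] * 8],
    anns_field="vec",
    param={"metric_type": "L2", "params": {"nprobe": 10}},
    limit=5,
    travel_timestamp=ts,  # search the state of the data at (or before) this timestamp
)
```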

Improvements

  • Failure Recovery speed:

    • #10737 Fixes Session checker for proxy.
    • #10723 Fixes seek query channel error.
    • #10907 Fixes LatestPosition option conflict with earliest patch.
    • #10616 Removes Common YAML.
    • #10771 Changes SeekPosition to the earliest of all segments.
    • #10651 Fixes query coord set seek position error.
    • #9543 Initializes global sealed segments and seek query channel when AddQueryChannel.
    • #9684 Skips re-consuming timetick MsgStream when data coord restarts.
  • Refactor meta snapshot:

    • #10288 Reduces information saved in SnapshotMeta.
    • #10703 Fixes failure when creating meta table because of compatibility issue.
    • #9778 Simplifies meta_snapshot interface.
  • #10563 Changes default balance policy.

  • #10730 Returns segment state when getting query segment information.

  • #10534 Supports reading MinIO configuration from environment variables.

  • #10114 Sets default gracefulTime to 0.

  • #9860 Hides liveChn into sessionutil and fix liveness initialization order.

  • #7115 Uses etcd to watch channel on data node.

  • #7606 Makes knowhere compile independently.

Features

  • Handoff:

    • #10330 Adds handoffTask.

    • #10084 Broadcasts sealedSegmentChangeInfo to queryChannel.

    • #10619 Fixes removing segment when query node receives segmentChangeInfo.

    • #10045 Watches changeInfo in query node.

    • #10011 Updates excluded segments info when receiving changeInfo.

    • #9606 Adds initialization information for AddQueryChannelRequest.

  • Primary key deduplication:

    • #10834 Removes primary key duplicated query result in query node.
    • #10355 Removes duplicated search results in proxy.
    • #10117 Removes duplicated search results in segcore reduce.
    • #10949 Uses primary key only to check search result duplication.
    • #10967 Removes primary key duplicated query result in proxy.
  • Auto-flush:

    • #10659 Adds injectFlush method for flushManager interface.
    • #10580 Adds injection logic for FlushManager.
    • #10550 Merges automatic and manual flush with same segment ID.
    • #10539 Allows flushed segments to trigger flush process.
    • #10197 Adds a timed flush trigger mechanism.
    • #10142 Applies flush manager logic in data node.
    • #10075 Uses single signal channel to notify flush.
    • #9986 Adds flush manager structure.
  • #10173 Adds binlog iterators.

  • #10193 Changes bloom filter to use primary key.

  • #9782 Adds allocIDBatch for data node allocator.

Bug Fixes

  • Incorrect collection loading behavior if there is not enough memory:

    • #10796 Fixes get container mem usage.
    • #10800 Uses TotalInactiveFile in GetContainerMemUsed.
    • #10603 Increases compatibility for EstimateMemorySize interface.
    • #10363 Adds cgroups to get container memory and check index memory in segment loader.
    • #10294 Uses proto size to calculate request size.
    • #9688 Estimates memory size with descriptor event.
    • #9681 Fixes the way that binlog stores the original memory size.
    • #9628 Stores original memory size of binlog file to extra information.
  • Size of etcd-related request is too large:

    • #10909 Fixes too many operations in txn request when saving segmentInfo.
    • #10812 Fixes too large request when loading segment.
    • #10768 Fixes too large request when loading collection.
    • #10655 Splits watch operations into many transactions.
    • #10587 Compacts multiSegmentChangeInfo to a single info.
    • #10425 Trims segmentinfo binlog for VChaninfo usage.
    • #10340 Fixes multiSave childTask failed to etcd.
    • #10310 Fixes error when assigning load segment request.
    • #10125 Splits large loadSegmentReq to multiple small requests.
  • System panics:

    • #10832 Adds query mutex to fix crash with panic.
    • #10821 Index node finishes the task before index coord changed the meta.
    • #10182 Fixes panic when flushing segment.
    • #10681 Fixes query coord panic when upgrading querychannelInfo.
  • RocksMQ-related issues:

    • #10367 Stops retention gracefully.
    • #9828 Fixes retention data race.
    • #9933 Changes retention ticker time to 10 minutes.
    • #9694 Deletes messages before deleting metadata in rocksmq retention.
    • #11029 Fixes rocksmq SeekToLatest.
    • #11057 Fixes SeekToLatest memory leakage and remove redundant logic.
    • #11081 Fixes rocksdb retention ts not set.
    • #11083 Adds topic lock for rocksmq Seek.
    • #11076 Moves topic lock to the front of final delete in retention expired cleanup.
  • #10751 loadIndex keeps retrying when indexFilePathInfo gets an empty list.

  • #10583 Changes ParseHybridTs return type to INT64.

  • #10599 Delete message hash error.

  • #10314 Index building task mistakenly canceled by index coord.

  • #9701 Incorrect CreateAlias/DropAlias/AlterAlias implementation.

  • #9573 Timeout when data coord saves binlog.

  • #9788 Watch Channel canceled due to revision compacted.

  • #10994 Index node does not balance load.

  • #11152 S...


milvus-2.0.0-rc7

Pre-release

v2.0.0-RC7

Release date: 2021-10-11

Compatibility

Milvus version | Python SDK version | Java SDK version | Go SDK version | Node SDK version
2.0.0-RC7 | 2.0.0rc7 | Coming soon | Coming soon | 1.0.16

Milvus 2.0.0-RC7 is a preview version of Milvus 2.0.0-GA. It supports collection aliases, shares msgstream on physical channels, and changes the default MinIO and Pulsar dependencies to the cluster version. Several resource leaks and deadlocks were fixed.
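
A short alias sketch with pymilvus (collection and alias names are illustrative): an alias lets clients keep using one stable name while the underlying collection is swapped:

```python
from pymilvus import utility

utility.create_alias(collection_name="articles_2021", alias="articles")
# Repoint the alias to a new collection without changing client code:
utility.alter_alias(collection_name="articles_2022", alias="articles")
utility.drop_alias(alias="articles")
```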

It should be noted that Milvus 2.0.0-RC7 is NOT compatible with previous versions of Milvus 2.0.0 because some changes made to storage format are incompatible.

Improvements

  • #8215 Adds max number of retries for interTask in query coord.

  • #9459 Applies collection start position.

  • #8721 Adds Node ID to Log Name.

  • #8940 Adds streaming segments memory to used memory in checkLoadMemory.

  • #8542 Replaces proto.MarshalTextString with proto.Marshal.

  • #8770 Refactors flowgraph and related invocation.

  • #8666 Changes CMake version.

  • #8653 Updates getCompareOpType.

  • #8697 #8682 #8657 Applies collection start position when opening segment.

  • #8608 Changes segment replica structure.

  • #8565 Refactors buffer size calculation.

  • #8262 Adds segcore logger.

  • #8138 Adds BufferData in insertBufferNode.

  • #7738 Implements allocating msgstream from pool when creating collections.

  • #8054 Improves codes in insertBufferNode.

  • #7909 Upgrades pulsar-client-go to 0.6.0.

  • #7913 Moves segcore rows_per_chunk configuration to query_node.yaml.

  • #7792 Removes ctx from LongTermChecker.

  • #9269 Changes == to is when comparing to None in expression.

  • #8159 Makes FlushSegments async.

  • #8278 Refactors rocksmq close logic and improves codecov.

  • #7797 Uses definitional type instead of raw type.

Features

  • #9579 Uses replica memory size and cacheSize in getSystemInfoMetrics.

  • #9556 Adds ProduceMark interface to return message ID.

  • #9554 Supports LoadPartial interface for DataKV.

  • #9471 Supports DescribeCollection by collection ID.

  • #9451 Stores index parameters to descriptor event.

  • #8574 Adds a round_decimal parameter for precision control to search function.

  • #8947 Rocksmq supports SubscriptionPositionLatest.

  • #8919 Splits blob into several string rows when index file is large.

  • #8914 Binlog parser tool supports index files.

  • #8514 Refactors the index file format.

  • #8765 Adds cacheSize to prevent OOM in query node.

  • #8673 #8420 #8212 #8272 #8166 Supports multiple Milvus clusters sharing Pulsar and MinIO.

  • #8654 Adds BroadcastMark for Msgstream returning Message IDs.

  • #8586 Adds Message ID return value into producers.

  • #8408 #8363 #8454 #8064 #8480 Adds session liveness check.

  • #8264 Adds description event extras.

  • #8341 Replaces MarshalTextString with Marshal in root coord.

  • #8228 Supports healthz check API.

  • #8276 Initializes the SIMD type when initializing an index node.

  • #7967 Adds knowhere.yaml to support knowhere configuration.

  • #7974 Supports setting max task number of task queue.

  • #7948 #7975 Adds suffixSnapshot to implement SnapshotKV.

  • #7942 Supports configuring SIMD type.

  • #7814 Supports bool field filter in search and query expression.

  • #7635 Supports setting segcore rows_per_chunk via configuration file.

Bug Fixes

  • #9572 Rocksdb does not delete the end key after DeleteRange is called.

  • #8735 Acked information takes up memory resources.

  • #9454 Data race in query service.

  • #8850 SDK raises error with a message about index when dropping collection by alias.

  • #8930 Flush occasionally gets stuck when SaveBinlogPath fails due to instant buffer removal from insertBuf.

  • #8868 Trace log catches the wrong file name and line number.

  • #8844 SearchTask result is nil.

  • #8835 Root coord crashes because of bug in pulsar-client-go.

  • #8780 #8268 #7255 Collection alias-related issues.

  • #8744 Rocksdb_kv error process.

  • #8752 Data race in mqconsumer.

  • #8686 Flush after auto-flush will not finish.

  • #8564 #8405 #8743 #8798 #9509 #8884 rocksdb memory leak.

  • #8671 Objects are not removed in MinIO when dropped.

  • #8050 #8545 #8567 #8582 #8562 tsafe-related issues.

  • #8137 Time goes backward because TSO does not load last timestamp.

  • #8461 Potential data race in data coord.

  • #8386 Incomplete logic when allocating dm channel to data node.

  • #8206 Incorrect reduce algorithm in proxy search task.

  • #8120 Potential data race in root coord.

  • #8068 Query node crashes when query result is empty and optional retrieve_ret_ is not initialized.

  • #8060 Query task panicking.

  • #8091 Data race in proxy gRPC client.

  • #8078 Data race in root coord gRPC client.

  • #7730 Topic and ConsumerGroup remain after CloseRocksMQ.

  • #8188 Logic error in releasing collections.

milvus-2.0.0-rc6

Pre-release

Release date: 2021-09-10

Compatibility

Milvus version | Python SDK version | Java SDK version | Go SDK version | Node SDK version
2.0.0-RC6 | 2.0.0rc6 | Coming soon | Coming soon | 1.0.16

Milvus 2.0.0-RC6 is a preview version of Milvus 2.0.0. It supports specifying the shard number when creating collections and querying by expression, and it exposes more cluster metrics through the API. In RC6 we increased the unit test coverage to 80%. We also fixed a series of issues involving resource leakage, system panics, etc.
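
A hedged sketch of the two headline features (assuming a pymilvus release where Collection() accepts shards_num; names are illustrative): specifying the shard number at collection creation and querying by a boolean expression:

```python
from pymilvus import Collection, CollectionSchema, FieldSchema, DataType

fields = [
    FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=False),
    FieldSchema(name="vec", dtype=DataType.FLOAT_VECTOR, dim=8),
]
collection = Collection("demo_rc6", CollectionSchema(fields), shards_num=4)  # 4 shards

collection.load()
results = collection.query(expr="pk in [1, 2, 3]", output_fields=["pk"])  # query by expression
```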

Improvements

  • Increases unit test coverage to 80%.

Features

  • #7482 Supports specifying shard number when creating a collection.
  • #7386 Supports query by expression.
  • Exposes system metrics through API:
    • #7400 Proxy metrics integrate with other coordinators.
    • #7177 Exposes metrics of data node and data coord.
    • #7228 Exposes metrics of root coord.
    • #7472 Exposes more detailed metrics information.
    • #7436 Supports caching the system information metrics.

Bug Fixes

  • #7434 Query node OOM when loading a collection that exceeds the memory limit.
  • #7678 Standalone OOM when recovering from existing storage.
  • #7636 Standalone panic when sending message to a closed channel.
  • #7631 Milvus panic when closing flowgraph.
  • #7605 Milvus crashed with panic when running nightly CI tests.
  • #7596 Nightly cases failed because rootcoord disconnected from etcd.
  • #7557 Wrong search result returned when the term content in expression is not in order.
  • #7536 Incorrect MqMsgStream Seek logic.
  • #7527 Dataset's memory leak in knowhere when searching.
  • #7444 Deadlock of channels time ticker.
  • #7428 Possible deadlock when MqMsgStream broadcast fails.
  • #7715 Query request overwritten by concurrent operations on the same slice.

milvus-2.0.0-rc5-hotfix1

Pre-release

Release date: 2021-09-01

Compatibility

Milvus version | Python SDK version | Java SDK version | Go SDK version | Node SDK version
2.0.0-RC5 | 2.0.0rc5 | Coming soon | Coming soon | 1.0.16

Milvus 2.0.0-RC5 is a preview version of Milvus 2.0.0. This hotfix solved a panic in standalone deployment.

Bug Fixes

  • #7393 Fixes rocksmq retention panic when deleting by message size.

milvus-2.0.0-rc5

Pre-release

Release date: 2021-08-30

Compatibility

Milvus version | Python SDK version | Java SDK version | Go SDK version | Node SDK version
2.0.0-RC5 | 2.0.0rc5 | Coming soon | Coming soon | 1.0.16

Milvus 2.0.0-RC5 is a preview version of Milvus 2.0.0. It supports a message queue data retention mechanism and etcd data cleanup, exposes cluster metrics through the API, and prepares for delete operation support. RC5 also made great progress on system stability: we fixed a series of resource leakage and operation hang issues, as well as the misconfiguration of standalone Pulsar under a Milvus cluster.

Improvements

  • #7226 Refactors data coord allocator.
  • #6867 Adds connection manager.
  • #7172 Adds a seal policy to restrict the lifetime of a segment.
  • #7163 Increases the timeout for gRPC connection when creating index.
  • #6996 Adds a minimum interval for segment flush.
  • #6590 Saves binlog path in SegmentInfo.
  • #6848 Removes RetrieveRequest and RetrieveTask.
  • #7102 Supports vector field as output.
  • #7075 Refactors NewEtcdKV API.
  • #6965 Adds channel for data node to watch etcd.
  • #7066 Optimizes search reduce logics.
  • #6993 Enhances the log when parsing gRPC recv/send parameters.
  • #7331 Changes context to correct package.
  • #7278 Enables etcd auto compaction for every 1000 revision.
  • #7355 Cleans up fmt.Println in util/flowgraph.

Features

  • #7112 #7174 Imports an embedded etcdKV (part 1).
  • #7231 Adds a segment filter interface.
  • #7157 Exposes metrics of index coord and index nodes.
  • #7137 #7157 Exposes system topology information by proxy.
  • #7113 #7157 Exposes metrics of query coord and query nodes.
  • #7134 Allows users to get vectors using memory instead of local storage.
  • #6617 Supports retention for rocksmq.
  • #7303 Adds query node segment filter.
  • #7304 Adds delete API into proto.
  • #7261 Adds delete node.
  • #7268 Constructs Bloom filter when inserting.

Bug Fixes

  • #7272 #7352 #7335 Failure to start new docker container with existing volumes if index was created: proxy is not healthy.
  • #7243 Failure to create index in a new version of Milvus for data that were inserted in an old version.
  • #7253 Search gets empty results after releasing a different partition.
  • #7244 #7227 Proxy crashes when receiving empty search results.
  • #7203 Connection gets stuck when gRPC server is down.
  • #7188 Incomplete unit test logics.
  • #7175 Unspecific error message returned when calculating distances using collection IDs without loading.
  • #7151 Data node flowgraph does not close caused by missing DropCollection.
  • #7167 Failure to load IVF_FLAT index.
  • #7123 Timestamp go back for timeticksync.
  • #7140 calc_distance returns wrong results for binary vectors when using TANIMOTO metrics.
  • #7143 The state of memory and etcd is inconsistent if KV operation fails.
  • #7141 #7136 Index building gets stuck when the index node pod is frequently killed and pulled up.
  • #7119 Pulsar msgStream may get stuck when subscribed with the same topic and sub name.
  • #6971 Exception occurs when searching with index (HNSW).
  • #7104 Search gets stuck if query nodes only load sealed segment without watching insert channels.
  • #7085 Segments do not auto flush.
  • #7074 Index nodes wait for index coord to start to complete.
  • #7061 Segment allocation does not expire if data coord does not receive timetick message from data node.
  • #7059 Query nodes get producer leakage.
  • #7005 Query nodes do not return error to query coord when loadSegmentInternal fails.
  • #7054 Query nodes return incorrect IDs when topk is larger than row_num.
  • #7053 Incomplete allocation logics.
  • #7044 Lack of check on unindexed vectors in memory before retrieving vectors in local storage.
  • #6862 Memory leaks in flush cache of data node.
  • #7346 Query coord container exited in less than 1 minute when re-installing Milvus cluster.
  • #7339 Incorrect expression boundary.
  • #7311 Collection nil when adding query collection.
  • #7266 Flowgraph released incorrectly.
  • #7310 Excessive timeout when searching after releasing and loading a partition.
  • #7320 Port conflicts between embedded etcd and external etcd.
  • #7336 Data node corner cases.