NOTE:
Starting from v0.6.0, all objects are defined as generics which accept only `net.Socket`
or `asyncnet.AsyncSocket`. All APIs follow this change and are defined as generics too.
The only change users have to make is to add the socket type to `Mongo`, `Database`,
`Query`, `Cursor`, and `Collection` to indicate which socket type it works with. E.g.
# Previous
var m: Mongo
# or
var m = newMongo(urlconn)

# to be
var m: Mongo[AsyncSocket] # or Mongo[Socket] for using net.Socket
# or
var m = newMongo[AsyncSocket](urlconn)
Other than that, all other APIs work the same for both Socket and AsyncSocket.
MongoDB is a document-based database which emphasizes high-performance read and write capabilities together with many strategies for clustering, consistency, and availability.
Anonimongo is a MongoDB driver written in pure Nim. As a library, it enables developers to access and use MongoDB in Nim projects. It provides low-level APIs which work directly with Database and higher-level APIs which work with Collection. Casual users can simply use the Collection APIs directly instead of working with the various Database operations.
The APIs closely follow the Mongo documentation, with a slight variation for the `explain`
API. Each command that supports `explain` has an optional string parameter to indicate the
verbosity of the `explain` command. By default, it's an empty string, which indicates the
command operates without explaining the query. Other detailed caveats can be found here.
All APIs are generic and work asynchronously using `AsyncSocket` or synchronously using `Socket`.
When using `AsyncSocket`, the user must `await` or `waitFor` the result depending on the scope.
When using `Socket`, this is not needed.
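For instance, the same `connect` call is awaited in the async variant but returns its value directly in the sync variant. This is a minimal sketch based on the connection examples elsewhere in this document (it assumes a server on the default localhost:27017):

```nim
import anonimongo

# async: every API returns a Future that must be awaited or waitFor-ed
var amongo = newMongo[AsyncSocket]()
if not waitFor amongo.connect:
  quit "Cannot connect to localhost:27017"
close amongo

# sync: the same APIs return their values directly, no waitFor needed
var smongo = newMongo[Socket]()
if not smongo.connect:
  quit "Cannot connect to localhost:27017"
close smongo
```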
Any API that inserts/updates/deletes returns a `WriteResult`, so the user can check
whether the write operation succeeded via its boolean field `success` and read the
error message from its string field `reason`. However, other errors are still thrown, such as:

- `MongoError` (failures related to the Database and Mongo APIs)
- `BsonFetchError` (fetching the wrong Bson type from a `BsonBase`)
- `KeyError` (accessing a non-existent key in a `BsonDocument` or an embedded document in a `BsonBase`)
- `IndexError` (accessing an index beyond the length of a `BsonArray`, or of a `BsonBase` that is actually a `BsonArray`)
- `IOError` (related to the socket)
- `TimeoutError` (when connecting with a `mongodb+srv` scheme URI)

These are raised by the underlying process. They indicate an error in the program flow, so handling them is elevated to the user.
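A minimal sketch of this error-handling pattern (assuming a reachable default server; the database, collection, and field names are made up for illustration):

```nim
import anonimongo

var mongo = newMongo[AsyncSocket]()
doAssert waitFor mongo.connect
let coll = mongo["temptest"]["writecheck"]
try:
  # write APIs report write failure through WriteResult instead of raising
  let res = waitFor coll.insert(@[bson { example: 1 }])
  if not res.success:
    echo "write failed: ", res.reason # the error message from the server
  else:
    echo "inserted documents: ", res.n
except MongoError:
  # driver/database-level failures are still raised as exceptions
  echo "mongo error: ", getCurrentExceptionMsg()
close mongo
```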
This page (`anonimongo.html`) is the elaborate documentation. It also explains the
modules and their categories. The index is also available.
There are two ways to define a Bson object (`newBson` and `bson`), but `bson` is
preferable since `newBson` is the low-level object definition. Users can roll out
their own operator as in the example below.
For creating a client, it's preferable to use the `newMongo` overload with a `MongoUri`
argument because that overload has better support for various client options, such as:

- `appName`, for identifying your application when connecting to the MongoDB server.
- `readPreference`, which supports `primary` (default), `primaryPreferred`, `secondary`, and `secondaryPreferred`.
- `w` (the write concern option).
- `retryableWrites`, which can be supplied with `false` (default) or `true`.
- `compressors`, which supports a list of compressors: `snappy` and `zlib`.
- `authSource`, which points to the database we want to authenticate against. This is not used if the user provides the path to the intended database in the `MongoUri`. So the database source for the `MongoUri` `"mongodb://localhost:27017/not-admin?authSource=admin"` is `"not-admin"`.
- `ssl` or `tls`, which can be `false` (default when not using the `mongodb+srv` scheme) or `true` (default when using the `mongodb+srv` scheme).
- `tlsInsecure`, `tlsAllowInvalidCertificates`, `tlsAllowInvalidHostnames`. Please refer to the MongoDB documentation as these three options have elaborate usage. In most cases, users don't have to bother with them.

All the above option parameters are case-insensitive, but the values are not, because the MongoDB server does not accept case-insensitive values.
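For example, several of the options above can be packed into the connection URI. This is an illustrative sketch; the host, credentials, and database are made up:

```nim
import anonimongo

# option names are case-insensitive, their values are not
let uri = "mongodb://user:pass@localhost:27017/admin?" &
  "appName=my-app&readPreference=primaryPreferred&" &
  "retryableWrites=true&compressors=snappy,zlib"
var mongo = newMongo[AsyncSocket](MongoUri uri)
if not waitFor mongo.connect:
  quit "Cannot connect"
close mongo
```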
import times
import anonimongo

var mongo = newMongo[AsyncSocket](poolconn = 16) # default is 64
if not waitFor mongo.connect:
  # default is localhost:27017
  quit "Cannot connect to localhost:27017"
var coll = mongo["temptest"]["colltest"]
let currtime = now().toTime()
var idoc = newseq[BsonDocument](10)
for i in 0 .. idoc.high:
  idoc[i] = bson {
    datetime: currtime + initDuration(hours = i),
    insertId: i
  }

# insert documents
let writeRes = waitfor coll.insert(idoc)
if not writeRes.success:
  echo "Cannot insert to collection: ", coll.name
else:
  echo "inserted documents: ", writeRes.n

let id5doc = waitfor coll.findOne(bson {
  insertId: 5
})
doAssert id5doc["datetime"] == currtime + initDuration(hours = 5)

# we define our own operator `!>` for this example only.
template `!>`(b: untyped): BsonDocument = bson(b)

# find one and modify, returning the old document by default
let oldid8doc = waitfor coll.findAndModify(
  !>{ insertId: 8 },
  !>{ "$set": { insertId: 80 } })

# find documents with a combination of find, which returns a query, and one/all/iter
let query = coll.find()
query.limit = 5.int32 # limit the documents we'll look at
let fivedocs = waitfor query.all()
doAssert fivedocs.len == 5

# let's iterate the documents instead of a bulk find;
# this iteration is a bit different from iterating the result
# of `query.all` because `query.iter` returns a `Future[Cursor]`
# and the Cursor has an `items` iterator.
var count = 0
for doc in waitfor query.iter:
  inc count
doAssert count == 5

# find one document, the newly modified one
let newid8doc = waitfor coll.findOne(bson { insertId: 80 })
doAssert oldid8doc["datetime"].ofTime == newid8doc["datetime"]

# remove a document
let delStat = waitfor coll.remove(bson {
  insertId: 9,
}, justone = true)
doAssert delStat.success # must be true if the query succeeded
doAssert delStat.kind == wkMany # the remove operation returns
                                # the WriteObject result variant
                                # of wkMany which holds the
                                # integer field n for the number of
                                # documents affected by a successful operation
doAssert delStat.n == 1 # number of affected documents

# count all documents in the current collection
let currNDoc = waitfor coll.count()
doAssert currNDoc == (idoc.len - delStat.n)

close mongo
import std/net
import anonimongo

# note in this example we're using net.Socket instead of AsyncSocket
var mongo = newMongo[Socket]()
if not mongo.connect:
  quit "Cannot connect to localhost:27017"
# change to :SHA1Digest for the SCRAM-SHA-1 mechanism
if not mongo.authenticate[:SHA256Digest](username, password):
  quit "Cannot login to localhost:27017"
close mongo

# Authenticating using the URI
mongo = newMongo[Socket](MongoUri("mongodb://username:password@domain-host/admin"))
if not mongo.connect:
  quit "Cannot connect to domain-host"
if not mongo.authenticate[:SHA256Digest]():
  quit "Cannot login to domain-host"
close mongo
# need to compile with the -d:ssl option to enable ssl
import strformat, uri
import anonimongo

let uriserver = "mongodb://username:password@localhost:27017/"
let sslkey = "/path/to/ssl/key.pem"
let sslcert = "/path/to/ssl/cert.pem"
let urissl = &"{uriserver}?tlsCertificateKeyFile=certificate:{encodeURL sslcert},key:{encodeURL sslkey}"
let connectToAtlas = "mongodb+srv://username:password@atlas-domain/admin"
let multipleHostUri = "mongodb://uname:passwd@domain-1,uname:passwd@domain-2,uname:passwd@domain-3/admin"

# uri ssl connection
var mongo = newMongo[AsyncSocket](MongoUri urissl)
close mongo

# or for the `mongodb+srv` connection scheme
mongo = newMongo[AsyncSocket](MongoUri connectToAtlas)
close mongo

# for multipleHostUri
mongo = newMongo[AsyncSocket](MongoUri multipleHostUri)
close mongo

# custom DNS server and its port
# by default: `dnsserver = "8.8.8.8"` and `dnsport = 53`
mongo = newMongo[AsyncSocket](
  MongoUri connectToAtlas,
  dnsserver = "1.1.1.1",
  dnsport = 5000)
close mongo
In test_replication_sslcon.nim, there's an example of an emulated custom DNS server
for the `SRV` DNS seedlist lookup. The URI to connect to is localhost:5000, which
in turn replies with localhost:27018, localhost:27019, and localhost:27020 as the
domains of the replica set.
# this time the server doesn't need SSL/TLS or authentication
# gridfs is useful when the file is bigger than the 16 megabyte document cap
import std/net
import anonimongo

var mongo = newMongo[Socket]()
doAssert mongo.connect
var grid = mongo["target-db"].createBucket() # by default, the bucket name is "fs"
let res = grid.uploadFile("/path/to/our/file")
if not res.success:
  echo "some error happened: ", res.reason
var gstream = grid.getStream("our-available-file")
let data = gstream.read(5.megabytes) # reading 5 megabytes of binary data
doAssert data.len == 5.megabytes
close gstream
close mongo
import times
import anonimongo/core/bson # if we only need to work with Bson

var simple = bson({
  thisField: "isString",
  embedDoc: {
    embedField1: "unicodeこんにちは異世界",
    "type": "cannot use any literal or Nim keyword except string literal or symbol",
    `distinct`: true, # keywords are acceptable when made into symbols with backticks
    # the trailing comma is accepted
    embedTimes: now().toTime,
  },
  "1.2": 1.2,
  arraybson: [1, "hello", false], # heterogenous elements
})
doAssert simple["thisField"] == "isString"
doAssert simple["embedDoc"]["embedField1"] == "unicodeこんにちは異世界"

# explicit fetch when BsonBase cannot be automatically converted.
doAssert simple["embedDoc"]["distinct"].ofBool
doAssert simple["1.2"].ofDouble is float64

# Bson supports object conversion too
type
  IntString = object
    field1*: int
    field2*: string

var bintstr = bson {
  field1: 1000,
  field2: "power-level"
}

let ourObj = bintstr.to IntString
doAssert ourObj.field1 == 1000
doAssert ourObj.field2 == "power-level"
import anonimongo/core/bson

type
  Obj1 = object
    str: string
    `int`: int
    `float`: float

proc toBson(obj: Obj1): BsonDocument =
  result = bson()
  for k, v in obj.fieldPairs:
    result[k] = v

let obj1 = Obj1(
  str: "test",
  `int`: 42,
  `float`: 42.0
)
let obj1doc = obj1.toBson
doAssert obj1doc["str"] == obj1.str
doAssert obj1doc["int"] == obj1.`int`
doAssert obj1doc["float"] == obj1.`float`
The conversion example above can be made generic:

proc toBson[T: tuple | object](o: T): BsonDocument =
  result = bson()
  for k, v in o.fieldPairs:
    result[k] = v
But according to the `fieldPairs` documentation, it only supports tuples and
objects, so users working with a ref object can only convert it manually.
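For a `ref object`, one manual workaround is to dereference the ref before iterating, since `fieldPairs` accepts the underlying object. A sketch; the `Person`/`PersonRef` types are made up for illustration:

```nim
import anonimongo/core/bson

type
  Person = object
    name: string
    age: int
  PersonRef = ref Person

proc toBson(o: PersonRef): BsonDocument =
  # dereference with o[] so fieldPairs sees a plain object
  result = bson()
  for k, v in o[].fieldPairs:
    result[k] = v

let p = PersonRef(name: "ann", age: 30)
doAssert p.toBson["name"] == "ann"
```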
The `toBson` snippet above can be modified to accommodate the `bsonKey` pragma
(available since `v0.4.5`):
import macros
import anonimongo/core/bson

proc toBson[T: tuple | object](o: T): BsonDocument =
  result = bson()
  for k, v in o.fieldPairs:
    when v.hasCustomPragma(bsonKey):
      var key = v.getCustomPragmaVal(bsonKey)
      result[key] = v # or v.toBson for explicit conversion
    else:
      result[k] = v # or v.toBson for explicit conversion
Check the tests for more examples of detailed usage.
Elaborate Bson examples and cases are covered in bson_test.nim.
# The example below is almost the same as the test code from
# the `test_bson_test.nim` file in tests
import anonimongo/core/bson

type
  OVKind = enum
    ovOne, ovMany, ovNone
  EmbedObjectVariant = object
    field1*: int
    field2*: string
    truthy {.bsonExport.}: bool
  RefEmbedObjVariant = ref EmbedObjectVariant
  ObjectVariant = object
    baseField*: string
    baseInt*: int
    baseEmbed*: BsonDocument
    case kind*: OVKind
    of ovOne:
      theOnlyField*: string
    of ovMany:
      manyField1*: string
      intField*: int
      embed*: EmbedObjectVariant
      refembed*: RefEmbedObjVariant
    of ovNone:
      nil
  OuterObject = ref object
    variant {.bsonExport, bsonKey: "objectVariant".}: ObjectVariant

# our Bson data
var bov = bson({
  baseField: "this is base string",
  baseInt: 3453,
  kind: "ovMany",
  manyField1: "example of ovMany",
  intField: 42,
  embed: {
    truthy: true,
  },
  refembed: {
    truthy: true,
  },
})
var outb = bson { objectVariant: bov }

# let's see if it's converted to the OVKind ovMany
var outer: OuterObject
let objmany = bov.to ObjectVariant
outer = outb.to OuterObject
doAssert objmany.kind == ovMany
doAssert objmany.baseField == bov["baseField"]
doAssert objmany.baseInt == bov["baseInt"]
doAssert objmany.embed.truthy
doAssert objmany.refembed.truthy
doAssert objmany.manyField1 == bov["manyField1"]
doAssert objmany.intField == bov["intField"]
doAssert outer.variant.kind == ovMany
doAssert outer.variant.baseField == "this is base string"
doAssert outer.variant.baseInt == 3453
doAssert outer.variant.baseEmbed.isNil

# let's change the kind to "ovOne"
let onlyFieldMsg = "this is dynamically added"
bov["kind"] = "ovOne"
bov["theOnlyField"] = onlyFieldMsg
outb.mget("objectVariant")["kind"] = "ovOne"
outb.mget("objectVariant")["theOnlyField"] = onlyFieldMsg
let objone = bov.to ObjectVariant
outer = outb.to OuterObject
doAssert objone.kind == ovOne
doAssert objone.baseField == bov["baseField"]
doAssert objone.theOnlyField == "this is dynamically added"
doAssert outer.variant.kind == ovOne
doAssert outer.variant.theOnlyField == onlyFieldMsg

# lastly, convert to "ovNone"
bov["kind"] = "ovNone"
outb.mget("objectVariant")["kind"] = "ovNone"
let objnone = bov.to ObjectVariant
outer = outb.to OuterObject
doAssert objnone.kind == ovNone
doAssert outer.variant.kind == ovNone
# This example shows how to extract a specific Bson key;
# check test_bson_test.nim for an elaborate bsonKey example
# Available since v0.4.5
import oids, times, macros
import anonimongo/core/bson

type
  SimpleIntString = object
    intfield {.bsonExport.}: int
    strfield*: string
  OidString = string # provided to enable our own custom conversion definition
  CustomObj = object
    # we retrieve the same "_id" into `id` and `idStr`, with
    # `idStr` using a specific conversion proc
    id {.bsonExport, bsonKey: "_id".}: Oid
    idStr {.bsonExport, bsonKey: "_id".}: OidString
    sis {.bsonExport, bsonKey: "sisEmbed".}: SimpleIntString
    currentTime {.bsonExport, bsonKey: "now".}: Time

proc ofOidString(b: BsonBase): OidString =
  echo "ofOidString is called"
  result = $b.ofObjectId

let bobj = bson({
  "_id": genOid(),
  sisEmbed: {
    intfield: 42,
    strfield: "forthy two"
  },
  now: now().toTime,
})
var cobj: CustomObj
expandMacros:
  cobj = bobj.to CustomObj
doAssert $cobj.id == $bobj["_id"].ofObjectId
doAssert cobj.idStr == $bobj["_id"].ofObjectId
doAssert $cobj.id == cobj.idStr
doAssert cobj.sis.strfield == bobj["sisEmbed"]["strfield"]
doAssert cobj.currentTime == bobj["now"]
It's often handy to work directly between Json and Bson. Currently there's no direct support
for converting Json to Bson and vice versa, but the snippet below is useful for generic
conversion between them. It needs Anonimongo since `v0.4.8` (a patch for working with `BsonArray`).
import json, times
import anonimongo/core/bson

let jsonobj = %*{
  "businessName": "TEST",
  "businessZip": "55555",
  "arr": [1, 2, 3, 4]
}

proc toBson(j: JsonNode): BsonDocument

proc convertElem(v: JsonNode): BsonBase =
  case v.kind
  of JInt: result = v.getInt
  of JString: result = v.getStr
  of JFloat: result = v.getFloat
  of JObject: result = v.toBson
  of JBool: result = v.getBool
  of JNull: result = bsonNull()
  of JArray:
    var arrval = bsonArray()
    for elem in v:
      arrval.add elem.convertElem
    result = arrval

proc toBson(j: JsonNode): BsonDocument =
  result = bson()
  for k, v in j:
    result[k] = v.convertElem

let bobj = jsonobj.toBson
doAssert bobj["businessName"] == jsonobj["businessName"].getStr
doAssert bobj["businessZip"] == jsonobj["businessZip"].getStr
doAssert bobj["arr"].len == jsonobj["arr"].len
proc toJson(b: BsonDocument): JsonNode

proc convertElem(v: BsonBase): JsonNode =
  case v.kind
  of bkInt32, bkInt64: result = newJInt v.ofInt
  of bkString: result = newJString v.ofString
  of bkBinary: result = newJString v.ofBinary.stringbytes
  of bkBool: result = newJBool v.ofBool
  of bkDouble: result = newJFloat v.ofDouble
  of bkEmbed: result = v.ofEmbedded.toJson
  of bkNull: result = newJNull()
  of bkTime: result = newJString $v.ofTime
  of bkArray:
    var jarray = newJArray()
    for elem in v.ofArray:
      jarray.add elem.convertElem
    result = jarray
  else:
    discard

proc toJson(b: BsonDocument): JsonNode =
  result = newJObject()
  for k, v in b:
    result[k] = v.convertElem

let jobj = bobj.toJson
doAssert jobj["businessName"].getStr == jsonobj["businessName"].getStr
doAssert jobj["businessZip"].getStr == jsonobj["businessZip"].getStr
doAssert jobj["arr"].len == jsonobj["arr"].len
In the example above we convert `jsonobj` (`JsonNode`) to `bobj` (`BsonDocument`)
and then convert `bobj` back to `jobj` (`JsonNode`). This should be useful
for most cases of working with Bson and Json.
This is an example of the changeStream operation. We watch a collection and print
each change to the console. It stops when there's a `delete` operation or the
collection is dropped.
import sugar
import anonimongo

## the watch feature can only be used with a replica set MongoDB server,
## so users need to run an available replica set first
proc main =
  var mongo = newMongo[AsyncSocket](
    MongoUri "mongodb://localhost:27018,localhost:27019,localhost:27020/admin",
    poolconn = 2)
  defer: close mongo
  if not waitfor mongo.connect:
    echo "failed to connect, quit"
    return
  var cursor: Cursor[AsyncSocket]
  let db = mongo["temptest"]
  # we create the collection explicitly
  dump waitfor db.create("templog")
  # the namespace will be `temptest.templog`
  let coll = db["templog"]
  # we try to watch the collection; errors are possible,
  # for example when the collection is not in a replica set database
  # or the options are invalid. In this example we simply do nothing
  # to handle the error except printing it to the screen
  try:
    cursor = waitfor coll.watch()
  except MongoError:
    echo "cannot watch the cursor"
    echo getCurrentExceptionMsg()
    return
  var lastChange: ChangeStream
  # we define a callback for how we're going to handle each change;
  # in this example we dump the change info to the screen
  # and use the closure to assign the value to the
  # `lastChange` variable.
  # With `stopWhen = {csDelete, csDrop}`, the watch loop
  # will break when there's a delete operation or the collection
  # is dropped.
  # Note that in this example we only want to watch the
  # collection, so we `waitFor` it to end; in case we want
  # to do other things we can run the `cursor.forEach`
  # in the background.
  waitFor cursor.forEach(
    proc(cs: ChangeStream) = dump cs; lastChange = cs,
    stopWhen = {csDelete, csDrop})
  dump lastChange
  #doAssert lastChange.operationType == csDelete
  dump waitfor coll.drop
main()
Head over to Todolist Example.
Check Upload-file Example.
Various examples with measurements can be found here.
The Bson module has some functionality to convert to and from objects. However, there are some points to be aware of:
- The `to` macro works exclusively on object typedescs, converting the basic types already supplied with `ofType` (with `Type` being `Int|Int32|Int64|Double|String|Time|Embedded|ObjectId`).
- Users can provide a custom proc, func, or converter with the pattern `of{Typename}` which accepts a `BsonBase` and returns a `Typename`. For example:
import macros
import anonimongo/core/bson

type
  Embedtion = object
    embedfield*: int
    embedstat*: string
    wasProcInvoked: bool
  SimpleEmbedObject = object
    intfield*: int
    strfield*: string
    embed*: Embedtion

proc ofEmbedtion(b: BsonBase): Embedtion =
  let embed = b.ofEmbedded
  result.embedfield = embed["embedfield"]
  result.embedstat = embed["embedstat"]
  result.wasProcInvoked = true

let bsimple = bson({
  intfield: 42,
  strfield: "that's 42",
  embed: {
    embedfield: 42,
    embedstat: "42",
  },
})
var simple: SimpleEmbedObject
expandMacros:
  simple = bsimple.to SimpleEmbedObject
doAssert simple.intfield == 42
doAssert simple.strfield == "that's 42"
doAssert simple.embed.embedfield == 42
doAssert simple.embed.embedstat == "42"
doAssert simple.embed.wasProcInvoked
Note that the conversion to `SimpleEmbedObject` with a custom `ofSimpleEmbedObject` proc,
func, or converter isn't checked, as it would be meaningless to use the `to` macro
when the user can simply call it directly. So for the outermost type, `to` won't check
whether the user provides an `of{MostOuterTypename}` implementation or not.
- Automatic Bson-to-Type conversion can only be done for fields that are exported or have the custom pragma `bsonExport`, as shown in this example.
- It potentially breaks when there's some arbitrary hierarchy of type definitions. While it can handle any depth of `distinct` types (that is, distinct of distinct of distinct of ... of Type), this should be an indication of a broken type definition and is better remedied in the type design itself. If the user thinks otherwise, please report the issue with the example code. As of v2 of the `to` macro, conversion of arbitrary `ref` and `distinct` types is supported. It cannot support `ref distinct Type`, as that doesn't make any sense, but it supports `distinct ref Type`. Please report an issue if the user finds otherwise.
- In case the user wants to persist with the current definition of any custom deep `distinct` type, the user should define the custom mechanism mentioned in point #1 above.
- With `v0.4.5`, users are able to map a custom Bson key to a specific field name by supplying the `bsonKey` pragma, e.g. `{.bsonKey: "theBsonKey".}`. Refer to the example above. The key is case-sensitive.
- The `to` macro doesn't support cyclic object types.
- As mentioned in point #1, the `to` macro works exclusively on defined object types. As pointed out in issue/10, to make it generic it's reasonable for a uniform `to` to convert any `typedesc`. Because the `ofType` variants for basic types are implemented as `converter`s, it's easy for the user to supply the `to` overload:
template to(b: BsonBase, name: typedesc): untyped =
  var r: name = b
  move r

There's no plan to add this snippet to the library but that may change in a later version.
- Any field of type `Option[T]` is ignored. Refer to point #2 (defining a user `ofTypename`) to support automatic conversion. For example, a Bson field we receive can have `int` or `null`, so we implement it:
import options
import anonimongo/core/bson

type
  # note that we need an intermediate alias type name as `to` only knows
  # the symbol for custom proc conversion.
  OptionalInt = Option[int]
  TheObj = object
    optint {.bsonExport.}: OptionalInt
    optstr {.bsonExport.}: Option[string]

let intexist = bson {
  optint: 42,
  optstr: "this will be ignored",
}
let intnull = bson {
  optint: bsonNull(),
  optstr: "not converted",
}

proc ofOptionalInt(b: BsonBase): Option[int] =
  if b.kind == bkInt32: result = some b.ofInt
  else: result = none[int]()
  # or we can
  # elif b.kind == bkNull: result = none[int]()
  # just for clarity that it can have BsonInt32 or BsonNull
  # as its value from Bson

let
  haveint = intexist.to TheObj
  noint = intnull.to TheObj
doAssert haveint.optint.isSome
doAssert haveint.optint.get == 42
doAssert haveint.optstr.isNone
doAssert noint.optint.isNone
doAssert noint.optstr.isNone
- Conversion of generic objects and generic field types is not tested. Very likely it will break the whole `to` conversion.
- Object field conversion doesn't work when the fields are declared together on one line, for example:
type
  SStr = object
    ss1*, ss2*: string
  SOkay = object
    ss1*: string
    ss2*: string

let bstr = bson {
  ss1: "string 1",
  ss2: "string 2",
}

# The compiler will complain that "node" has no type.
let sstr = bstr.to SStr

# This works because `ss1` and `ss2` aren't grouped together
let sokay = bstr.to SOkay
Since each field can have a different pragma definition, it's always preferable to declare each field on its own line.
Anonimongo requires a minimum Nim version of `v1.4.0`.
For installation, we can choose one of the several methods mentioned below.
Using Nimble package:
nimble install anonimongo
Or to install it locally
git clone https://github.com/mashingan/anonimongo
cd anonimongo
nimble develop
or directly from Github repo
nimble install https://github.com/mashingan/anonimongo
to install the `#head` branch
nimble install https://github.com/mashingan/anonimongo@#head
#or
nimble install anonimongo@#head
The code in `#head` is always a tagged version. An untagged `#head` on the master branch
usually only contains changes unrelated to the code itself.
requires "anonimongo"
or directly from Github repo
requires "https://github.com/mashingan/anonimongo"
This library implements the MongoDB APIs from the Mongo reference manual and the mongo spec.
- ✔️ Driver for Mongo 6 and up
- ✔️ URI connect
- ✔️ Multiquery on URI connect
- ✔️ Multihost on URI connect
- 🔳 Multihost on simple connect
- ✔️ SSL/TLS connection
- ✔️ SCRAM-SHA-1 authentication
- ✔️ SCRAM-SHA-256 authentication
- ✔️ `isMaster` connection
- ✔️ `TailableCursor` connection
- ✔️ `SlaveOk` operations
- ✔️ Compression connection
- ✔️ Retryable writes
- 🔳 Retryable reads
- 🔳 Sessions
✅ Aggregation commands 4/4 Mongo doc Anonimongo module
- ✔️ `aggregate` (collection procs: `aggregate`)
- ✔️ `count` (collection procs: `count`)
- ✔️ `distinct` (collection procs: `distinct`)
- ✔️ `mapReduce` (db procs: `mapReduce`)
✅ Geospatial command 1/1 Mongo doc Anonimongo module
- ✔️ `geoSearch` (db procs: `geoSearch`)
✅ Query and write operations commands 7/7 (8) Mongo doc Anonimongo module
- ✔️ `delete` (collection procs: `remove`, `remove`, `remove`)
- ✔️ `find` (collection procs: `find`, `findOne`, `findAll`, `findIter`)
- ✔️ `findAndModify` (collection procs: `findAndModify`)
- ✔️ `getMore` (db procs: `getMore`)
- ✔️ `insert` (collection procs: `insert`)
- ✔️ `update` (collection procs: `update`)
- ✔️ `getLastError` (db procs: `getLastError`)
- 🔳 `resetError` (deprecated)
❌ Query plan cache commands 0/6 Mongo doc Anonimongo module
- 🔳 `planCacheClear`
- 🔳 `planCacheClearFilters`
- 🔳 `planCacheListFilters`
- 🔳 `planCacheListPlans`
- 🔳 `planCacheListQueryShapes`
- 🔳 `planCacheSetFilter`
☑️ Database operations commands 1/3 Mongo doc Anonimongo module
- ✔️ `authenticate`, implemented as Mongo procs (`authenticate`, `authenticate`)
- 🔳 `getnonce`
- 🔳 `logout`
✅ User management commands 7/7 Mongo doc Anonimongo module
- ✔️ `createUser` (db procs: `createUser`)
- ✔️ `dropAllUsersFromDatabase` (db procs: `dropAllUsersFromDatabase`)
- ✔️ `dropUser` (db procs: `dropUser`)
- ✔️ `grantRolesToUser` (db procs: `grantRolesToUser`)
- ✔️ `revokeRolesFromUser` (db procs: `revokeRolesFromUser`)
- ✔️ `updateUser` (db procs: `updateUser`)
- ✔️ `usersInfo` (db procs: `usersInfo`)
✅ Role management commands 10/10 Mongo doc Anonimongo module
- ✔️ `createRole` (db procs: `createRole`)
- ✔️ `dropRole` (db procs: `dropRole`)
- ✔️ `dropAllRolesFromDatabase` (db procs: `dropAllRolesFromDatabase`)
- ✔️ `grantPrivilegesToRole` (db procs: `grantPrivilegesToRole`)
- ✔️ `grantRolesToRole` (db procs: `grantRolesToRole`)
- ✔️ `invalidateUserCache` (db procs: `invalidateUserCache`)
- ✔️ `revokePrivilegesFromRole` (db procs: `revokePrivilegesFromRole`)
- ✔️ `revokeRolesFromRole` (db procs: `revokeRolesFromRole`)
- ✔️ `rolesInfo` (db procs: `rolesInfo`)
- ✔️ `updateRole` (db procs: `updateRole`)
✅ Replication commands 12/12(13) Mongo doc Anonimongo module
- 🔳 `applyOps` (internal command)
- ✔️ `isMaster` (db procs: `isMaster`)
- ✔️ `replSetAbortPrimaryCatchUp` (db procs: `replSetAbortPrimaryCatchUp`)
- ✔️ `replSetFreeze` (db procs: `replSetFreeze`)
- ✔️ `replSetGetConfig` (db procs: `replSetGetConfig`)
- ✔️ `replSetGetStatus` (db procs: `replSetGetStatus`)
- ✔️ `replSetInitiate` (db procs: `replSetInitiate`)
- ✔️ `replSetMaintenance` (db procs: `replSetMaintenance`)
- ✔️ `replSetReconfig` (db procs: `replSetReconfig`)
- ✔️ `replSetResizeOplog` (db procs: `replSetResizeOplog`)
- ✔️ `replSetStepDown` (db procs: `replSetStepDown`)
- ✔️ `replSetSyncFrom` (db procs: `replSetSyncFrom`)
❌ Sharding commands 0/27 Mongo doc Anonimongo module
- 🔳 `addShard`
- 🔳 `addShardToZone`
- 🔳 `balancerStart`
- 🔳 `balancerStop`
- 🔳 `checkShardingIndex`
- 🔳 `clearJumboFlag`
- 🔳 `cleanupOrphaned`
- 🔳 `enableSharding`
- 🔳 `flushRouterConfig`
- 🔳 `getShardMap`
- 🔳 `getShardVersion`
- 🔳 `isdbgrid`
- 🔳 `listShard`
- 🔳 `medianKey`
- 🔳 `moveChunk`
- 🔳 `movePrimary`
- 🔳 `mergeChunks`
- 🔳 `removeShard`
- 🔳 `removeShardFromZone`
- 🔳 `setShardVersion`
- 🔳 `shardCollection`
- 🔳 `shardCollection`
- 🔳 `split`
- 🔳 `splitChunk`
- 🔳 `splitVector`
- 🔳 `unsetSharding`
- 🔳 `updateZoneKeyRange`
❌ Session commands 0/8 Mongo doc Anonimongo module
- 🔳 `abortTransaction`
- 🔳 `commitTransaction`
- 🔳 `endSessions`
- 🔳 `killAllSessions`
- 🔳 `killAllSessionByPattern`
- 🔳 `killSessions`
- 🔳 `refreshSessions`
- 🔳 `startSession`
☑️ Administration commands 15/32 Mongo doc Anonimongo module
- 🔳 `clean` (internal namespace command)
- 🔳 `cloneCollection`
- 🔳 `cloneCollectionAsCapped`
- 🔳 `collMod`
- 🔳 `compact`
- 🔳 `connPoolSync`
- 🔳 `convertToCapped`
- ✔️ `create` (db procs: `create`)
- ✔️ `createIndexes` (collection procs: `createIndexes`)
- ✔️ `currentOp` (db procs: `currentOp`)
- ✔️ `drop` (collection procs: `drop`)
- ✔️ `dropDatabase` (db procs: `dropDatabase`)
- 🔳 `dropConnections`
- ✔️ `dropIndexes` (collection procs: `dropIndex`, `dropIndexes`)
- 🔳 `filemd5`
- 🔳 `fsync`
- 🔳 `fsyncUnlock`
- 🔳 `getParameter`
- ✔️ `getDefaultReadConcern` (db procs: `getDefaultReadConcern`)
- ✔️ `killCursors` (db procs: `killCursors`)
- ✔️ `killOp` (db procs: `killOp`)
- ✔️ `listCollections` (db procs: `listCollections`)
- ✔️ `listDatabases` (db procs: `listDatabases`)
- ✔️ `listIndexes` (collection procs: `listIndexes`)
- 🔳 `logRotate`
- 🔳 `reIndex`
- ✔️ `renameCollection` (db procs: `renameCollection`)
- ✔️ `setDefaultRWConcern` (db procs: `setDefaultRWConcern`)
- 🔳 `setFeatureCompabilityVersion`
- 🔳 `setIndexCommitQuorum`
- 🔳 `setParameter`
- ✔️ `shutdown` (db procs: `shutdown`)
✅ Diagnostic commands 17/17 (26) Mongo doc Anonimongo module
- 🔳 `availableQueryOptions` (internal command)
- ✔️ `buildInfo` (db procs: `buildInfo`)
- ✔️ `collStats` (db procs: `collStats`)
- ✔️ `connPoolStats` (db procs: `connPoolStats`)
- ✔️ `connectionStatus` (db procs: `connectionStatus`)
- 🔳 `cursorInfo` (removed; use metrics.cursor from `serverStatus` instead)
- ✔️ `dataSize` (db procs: `dataSize`)
- ✔️ `dbHash` (db procs: `dbHash`)
- ✔️ `dbStats` (db procs: `dbStats`)
- 🔳 `diagLogging` (removed in Mongo 3.6; use mongoreplay instead)
- 🔳 `driverOIDTest` (internal command)
- ✔️ `explain` (db procs: `explain`)
- 🔳 `features` (internal command)
- ✔️ `getCmdLineOpts` (db procs: `getCmdLineOpts`)
- ✔️ `getLog` (db procs: `getLog`)
- ✔️ `hostInfo` (db procs: `hostInfo`)
- 🔳 `isSelf` (internal command)
- ✔️ `listCommands` (db procs: `listCommands`)
- 🔳 `netstat` (internal command)
- ✔️ `ping` (db procs: `ping`)
- ✔️ `profile` (internal command) (db procs: `profile`)
- ✔️ `serverStatus` (db procs: `serverStat`)
- ✔️ `shardConnPoolStats` (db procs: `shardConnPoolStats`)
- ✔️ `top` (db procs: `top`)
- ✔️ `validate` (db procs: `validate`)
- 🔳 `whatsmyuri` (internal command)
✅ Free monitoring commands 2/2 Mongo doc Anonimongo module
- ✔️ `getFreeMonitoringStatus` (db procs: `getFreeMonitoringStatus`)
- ✔️ `setFreeMonitoring` (db procs: `setFreeMonitoring`)
❌ Auditing commands 0/1, only available for Mongodb Enterprise and AtlasDB Mongo doc Anonimongo module
- 🔳 `logApplicationMessage`
There are several points to keep in mind:

- `diagnostic.explain` and the corresponding `explain`-ed versions of various commands haven't undergone extensive testing.
- `Query` is only provided for the `db.find` commands. It doesn't yet support the Query Plan Cache or anything related to it.
- All `readPreference` options are supported except `nearest`.
- Some third-party libraries targeting OpenSSL <= 1.0 result in unstable behaviour. See issue #7 comment.
- All internal connection implementations use asynchronous IO. There is no support for multi-threading.
- `retryableWrites` performs the operation twice in case the first attempt fails. The Mongo reference for it can be found here. It's hard to test intentional failure, so it hasn't undergone extensive testing. As it's almost no different from a normal operation, users can retry by themselves to increase the number of retries. Bulk write reuses the previously mentioned operations, so it's supported too.
If this library is useful to you and you can spare some, any donation would be appreciated: Paypal.
If you want particular features implemented and would like commercial support, you can reach out by email to rahmatullah21@proton.me.
MIT