diff --git a/index.html b/index.html
index 082e23e3..b2b50803 100644
--- a/index.html
+++ b/index.html
@@ -878,7 +878,7 @@
+ <p>The expected data include 2D and 3D streams produced by digital microscopes and recordings thereof. These streams may contain metadata that describe the instantaneous magnification and timescale of the data. The expected data also include the output streams produced by services; these streams could, for instance, contain annotation data.</p>
+ <p>With respect to annotating video streams, one could make use of secondary video tracks containing uniquely identified bounding boxes, or more intricate silhouettes, that define spatial regions to which semantic data, e.g., metadata or annotations, could be attached using yet other secondary tracks. Similar approaches could work for point-cloud-based and mesh-based animations.</p>
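+ One way to sketch such a secondary annotation track is with WebVTT metadata cues, whose payloads may carry arbitrary text such as JSON. The cue payload shape below, normalized bounding boxes as <code>[x, y, width, height]</code> with an identifier and label, is an assumption for illustration, not a defined format:

```
WEBVTT

00:00:00.000 --> 00:00:04.000
{"id": "region-1", "label": "cell nucleus", "bbox": [0.42, 0.31, 0.10, 0.08]}

00:00:04.000 --> 00:00:08.000
{"id": "region-1", "label": "cell nucleus", "bbox": [0.45, 0.30, 0.10, 0.08]}
```

+ The shared <code>"id"</code> would let yet other tracks attach further semantic data to the same region across cues.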
+ <p>Mixed-reality collaborative spaces enable users to visualize and interact with data and to work together from multiple locations on shared tasks and projects.</p>
+ <p>Digital microscopes could be accessed and utilized from mixed-reality collaborative spaces via WoT architecture and standards, and could thus be used throughout biomedicine, the sciences, and education. Data from digital microscopes could be processed by services to produce outputs useful to users. Users could select and configure one or more such services and route streaming data or recordings through them to consume the resultant data in a mixed-reality collaborative space. Users could also create graphs, or networks, of such services. Services could, additionally, communicate back to digital microscopes to control their mechanisms and settings. Services that simultaneously process digital microscope data and communicate back to control these devices could provide users with automatic focusing, magnification, and tracking.</p>
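+ As a sketch of how a digital microscope might be exposed via WoT standards, the following is a minimal, hypothetical Thing Description; the affordance names (<code>magnification</code>, <code>focusDepth</code>, <code>trackRegion</code>, <code>frameStream</code>) and the <code>microscope.example</code> endpoints are assumptions for illustration:

```
{
  "@context": "https://www.w3.org/2019/wot/td/v1",
  "title": "HypotheticalDigitalMicroscope",
  "securityDefinitions": {"nosec_sc": {"scheme": "nosec"}},
  "security": ["nosec_sc"],
  "properties": {
    "magnification": {
      "type": "number",
      "forms": [{"href": "https://microscope.example/props/magnification"}]
    },
    "focusDepth": {
      "type": "number",
      "forms": [{"href": "https://microscope.example/props/focusDepth"}]
    }
  },
  "actions": {
    "trackRegion": {
      "input": {"type": "object", "properties": {"regionId": {"type": "string"}}},
      "forms": [{"href": "https://microscope.example/actions/trackRegion"}]
    }
  },
  "events": {
    "frameStream": {
      "data": {"type": "string"},
      "forms": [{"href": "https://microscope.example/events/frames", "subprotocol": "sse"}]
    }
  }
}
```

+ A service performing automatic focusing or tracking could then write to <code>focusDepth</code> or invoke <code>trackRegion</code> while consuming <code>frameStream</code>.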
+ <p>Multimodal user interfaces could be dynamically generated for digital microscope content by making use of the output data provided by computer-vision-related services. Such dynamic multimodal user interfaces could provide users with the means of pointing and using spoken natural language to indicate precisely which content they wish to focus on, magnify, or track.</p>
+ <p>For example, a digital microscope could be magnifying and streaming 2D or 3D imagery of a living animal cell. This data could be processed by a service which provides computer-vision-related annotations, labeling parts of the cell: the cell nucleus, Golgi apparatus, ribosomes, the endoplasmic reticulum, mitochondria, and so forth. Users could then interact with the resultant visual content and its algorithmically generated annotations, pointing and using spoken natural language to indicate precisely which parts of the living animal cell they wished for the digital microscope to focus on, magnify, or track.</p>
+ <p>Requirements that are not addressed in the current WoT standards or building blocks include streaming protocols and formats for 3D digital microscope data and recordings. While digital microscopes could stream video using a variety of existing protocols and formats, the streaming of other forms of 3D data and animations, e.g., point clouds and meshes, could be facilitated by recommendation.</p>
+ <p>Users could select and configure one or more services and route data streaming from digital microscopes through them to consume the resultant data in a mixed-reality collaborative space. Additionally, services could be designed to communicate back to and control the mechanisms and settings of digital microscopes.</p>
+ <p>Requirements that are not addressed in the current WoT standards or building blocks include a means of interconnecting services. Perhaps services could utilize WoT architecture and could be described as WoT Things, or virtual devices, providing functionality that includes establishing data connectivity between them.</p>
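+ The routing of a microscope's stream through a graph of interconnected services can be sketched abstractly. The sketch below models services as generator stages over a frame stream; all names, data shapes, and the placeholder control law are assumptions for illustration, not part of any WoT building block:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A single frame of (hypothetical) microscope output."""
    timestamp: float
    magnification: float
    annotations: list = field(default_factory=list)

def annotation_service(frames):
    # Stand-in for a computer-vision service that labels each frame.
    for f in frames:
        f.annotations.append({"label": "mitochondrion",
                              "bbox": [0.4, 0.4, 0.1, 0.1]})
        yield f

def autofocus_service(frames, microscope):
    # Stand-in for a service that also communicates back to the device,
    # writing a new focus setting as it processes each frame.
    for f in frames:        # placeholder control law, not a real algorithm
        microscope["focus"] = f.timestamp % 1.0
        yield f

# Compose the two services into a small graph and consume the result.
microscope = {"focus": 0.0}
stream = (Frame(t / 30.0, 40.0) for t in range(3))
results = list(autofocus_service(annotation_service(stream), microscope))
```

+ The same composition pattern would extend to larger graphs of services, with WoT Thing Descriptions supplying the discovery and connectivity that the plain function calls stand in for here.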
+