Public repository for custom blocks for Omniscope Evo.
- Design your custom block in Omniscope Evo 2020.1 or later. The source code should be reasonably documented and, where applicable, contain sections describing the input fields and parameters.
- Export as a ZIP file from the block dialog.
- Send the file to firstname.lastname@example.org and we will include it for you.
- Follow points 1-2 from the simple way.
- Fork the repository.
- Create or use a directory in the forked repository under one of the main sections that specifies the general area of what the block does.
- Extract the ZIP file into this directory.
- Consider adding a README.md for convenience, and a thumbnail.png.
- Run the Python scripts `create_index.py` and `create_readme.py` located in the root of the repository.
- Create a pull request.
- Add row ID field
- Field Renamer
- URL Encode
- Unescape HTML
- Split Address
- Unstack Records
- XPT Reader
- Yahoo Finance
- Google BigQuery Import Table
- Google BigQuery Custom SQL
- Custom scripts
The ForEach Multi Stage block orchestrates the execution of another Omniscope project, running its workflow multiple times, each time with a different set of parameter values. Unlike the ForEach block, it allows multiple stages of execution, executing/refreshing from source a different set of blocks in each stage.
Converts gridsquare / Maidenhead locator references to geographic coordinates.
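The underlying conversion is standard: each pair of locator characters refines the position (fields of 20°×10°, squares of 2°×1°, subsquares of 5'×2.5'). A pure-Python sketch, where the function name and centre-of-square convention are illustrative rather than the block's actual code:

```python
def maidenhead_to_latlon(locator):
    """Convert a 4- or 6-character Maidenhead locator to the
    latitude/longitude of the centre of the grid square."""
    loc = locator.strip()
    lon = (ord(loc[0].upper()) - ord('A')) * 20.0 - 180.0  # field: 20 deg lon
    lat = (ord(loc[1].upper()) - ord('A')) * 10.0 - 90.0   # field: 10 deg lat
    lon += int(loc[2]) * 2.0                               # square: 2 deg lon
    lat += int(loc[3]) * 1.0                               # square: 1 deg lat
    if len(loc) >= 6:
        lon += (ord(loc[4].lower()) - ord('a')) * 5.0 / 60.0   # subsquare: 5' lon
        lat += (ord(loc[5].lower()) - ord('a')) * 2.5 / 60.0   # subsquare: 2.5' lat
        lon += 2.5 / 60.0   # centre of subsquare
        lat += 1.25 / 60.0
    else:
        lon += 1.0          # centre of square
        lat += 0.5
    return lat, lon

print(maidenhead_to_latlon("FN31pr"))  # roughly (41.73, -72.71)
```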
Matches regions in a shapefile with geographical points having latitude and longitude.
Interfaces with Kedro workflows.
Normalise semi-structured JSON strings into a flat table, appending data record by record.
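Flattening one nested record can be sketched with the standard library alone; the dotted-path separator and function name below are illustrative assumptions, not the block's actual behaviour:

```python
import json

def flatten(obj, prefix="", sep="."):
    """Recursively flatten nested dicts/lists into a single-level dict."""
    flat = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            flat.update(flatten(value, f"{prefix}{sep}{key}" if prefix else key, sep))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            flat.update(flatten(value, f"{prefix}{sep}{i}" if prefix else str(i), sep))
    else:
        flat[prefix] = obj
    return flat

record = json.loads('{"id": 1, "user": {"name": "Ada", "tags": ["x", "y"]}}')
print(flatten(record))
# {'id': 1, 'user.name': 'Ada', 'user.tags.0': 'x', 'user.tags.1': 'y'}
```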
Performs a join between the first (left) and second (right) input. The field on which the join is performed must be text containing multiple terms. The result will contain joined records based on how many terms they share, weighted by inverse document frequency.
Performs a join between values in the first input and intervals in the second input. Rows are joined if the value is contained in an interval.
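The matching rule can be sketched in pure Python; the field names `value`, `lo` and `hi` are illustrative assumptions:

```python
def interval_join(values, intervals):
    """Join each value row to every interval [lo, hi] that contains its value."""
    joined = []
    for row in values:
        for iv in intervals:
            if iv["lo"] <= row["value"] <= iv["hi"]:
                joined.append({**row, **iv})
    return joined

values = [{"value": 5}, {"value": 42}]
intervals = [{"lo": 0, "hi": 10, "label": "small"},
             {"lo": 40, "hi": 100, "label": "large"}]
print(interval_join(values, intervals))
# [{'value': 5, 'lo': 0, 'hi': 10, 'label': 'small'},
#  {'value': 42, 'lo': 40, 'hi': 100, 'label': 'large'}]
```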
Performs a join between the first (left) and second (right) input. The join can be performed using the equality/inequality comparators ==, <=, >=, <, >, which means the result is a constrained cartesian join including all record pairs that satisfy the comparison.
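A minimal sketch of such a constrained cartesian join, assuming the inputs are simple lists of dicts (field names are illustrative):

```python
from itertools import product
import operator

OPS = {"==": operator.eq, "<=": operator.le, ">=": operator.ge,
       "<": operator.lt, ">": operator.gt}

def inequality_join(left, right, left_field, op, right_field):
    """Keep every left/right record pair satisfying left.field OP right.field."""
    cmp = OPS[op]
    return [{**l, **r} for l, r in product(left, right)
            if cmp(l[left_field], r[right_field])]

left = [{"start": 1}, {"start": 5}]
right = [{"end": 3}, {"end": 7}]
print(inequality_join(left, right, "start", "<", "end"))
# [{'start': 1, 'end': 3}, {'start': 1, 'end': 7}, {'start': 5, 'end': 7}]
```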
Partitions the data into chunks of the desired size. There will be a new field called "Partition" which contains a number unique to each partition.
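A sketch of the chunking logic, assuming the records arrive as an ordered list of dicts:

```python
def partition(records, size):
    """Append a 'Partition' field numbering consecutive chunks of `size` records."""
    return [{**rec, "Partition": i // size} for i, rec in enumerate(records)]

rows = [{"x": n} for n in range(5)]
print(partition(rows, 2))
# Partition values: 0, 0, 1, 1, 2
```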
Keeps all selected fixed fields in the output and de-pivots all other fields.
Standardises the values in the selected fields so that they lie in the range between 0 and 1, i.e. the highest value in each field becomes 1 and the lowest becomes 0; all other values are scaled proportionally.
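The scaling rule is plain min-max normalisation, new = (old - min) / (max - min); a sketch, assuming records as a list of dicts:

```python
def normalise(records, fields):
    """Scale each listed numeric field to the range [0, 1] (min-max normalisation)."""
    out = [dict(rec) for rec in records]
    for field in fields:
        vals = [rec[field] for rec in out]
        lo, hi = min(vals), max(vals)
        span = hi - lo or 1  # avoid division by zero when all values are equal
        for rec in out:
            rec[field] = (rec[field] - lo) / span
    return out

data = [{"v": 10}, {"v": 15}, {"v": 20}]
print(normalise(data, ["v"]))  # v becomes 0.0, 0.5, 1.0
```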
Executes another Omniscope project multiple times, each time with a different set of parameter values.
Adds a Row ID field with a sequential number.
Renames the fields of a data set given a list of current names and new names.
URL-encodes strings in a field using the UTF-8 encoding scheme.
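With Python's standard library this is essentially `urllib.parse.quote`, which percent-encodes using UTF-8 by default; a sketch of applying it per field (the wrapper function name is illustrative):

```python
from urllib.parse import quote

def url_encode_field(records, field):
    """URL-encode the given text field in each record (UTF-8 percent-encoding)."""
    return [{**rec, field: quote(rec[field])} for rec in records]

rows = [{"q": "café & crème"}]
print(url_encode_field(rows, "q"))  # [{'q': 'caf%C3%A9%20%26%20cr%C3%A8me'}]
```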
Converts all named and numeric character references to the corresponding Unicode characters.
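Python's standard library covers this directly via `html.unescape`, which handles both named and numeric references:

```python
from html import unescape

# Named reference, ampersand, and a numeric reference in one string.
print(unescape("Caf&eacute; &amp; &#233;clair"))  # Café & éclair
```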
Splits an address field into street name, street number, and suffix.
Unstack all records by splitting on text fields with stacked values, filling records with empty strings where needed.
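One way to sketch the splitting-and-padding logic, assuming `;`-separated stacked values (the separator and function name are assumptions for illustration):

```python
from itertools import zip_longest

def unstack(records, fields, sep=";"):
    """Split the stacked text fields into one record per element,
    padding with empty strings when the fields differ in length."""
    out = []
    for rec in records:
        split = [str(rec[f]).split(sep) for f in fields]
        for values in zip_longest(*split, fillvalue=""):
            new = dict(rec)
            new.update(zip(fields, values))
            out.append(new)
    return out

rows = [{"id": 1, "a": "x;y;z", "b": "p;q"}]
print(unstack(rows, ["a", "b"]))
# [{'id': 1, 'a': 'x', 'b': 'p'},
#  {'id': 1, 'a': 'y', 'b': 'q'},
#  {'id': 1, 'a': 'z', 'b': ''}]
```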
Performs GMM clustering on the first input data provided. The output consists of the original input with a Cluster field appended. If a second input is available, it will be used as output instead.
Performs DBScan clustering on the first input data provided. The output consists of the original input with a Cluster field appended. If a second input is available, it will be used as output instead.
Performs KMeans clustering on the first input data provided. The output consists of the original input with a Cluster field appended. If a second input is available, it will be used as output instead.
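The block presumably wraps a library implementation; for illustration only, a minimal pure-Python version of Lloyd's algorithm, which is the classic KMeans iteration:

```python
import random

def kmeans(points, k, iters=25, seed=0):
    """Lloyd's algorithm: returns one cluster label per input point."""
    def dist2(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))

    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid per point.
        labels = [min(range(k), key=lambda i: dist2(p, centroids[i])) for p in points]
        # Update step: mean of each cluster (keep old centroid if a cluster empties).
        for i in range(k):
            members = [p for p, lab in zip(points, labels) if lab == i]
            if members:
                centroids[i] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
labels = kmeans(pts, 2)
# The two left points share a label, as do the two right points.
```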
Given a dataset in which each record represents an edge between two nodes of a network, and each node has an associated categorical attribute, the block analyses connections between attributes based on the connections between the associated nodes. The result is a list of records, each specifying a connection from one attribute to another. Each connection contains a probability field answering the question: if a node has the first attribute, how probable is it that it is connected to a node with the linked attribute?
Given a dataset in which each record represents an edge between two nodes of a network, the block projects all the nodes onto a low-dimensional (e.g. 2-D) plane such that nodes which share many connections are close together, and nodes which share few connections are far apart.
Performs k-nearest-neighbour prediction on the data. The prediction for a new point depends on the k-nearest-neighbours around the point. The majority class is used as the prediction.
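The voting rule can be sketched in a few lines of pure Python; the Euclidean distance and the labelled-tuple training format are assumptions for illustration:

```python
from collections import Counter

def knn_predict(train, point, k=3):
    """Predict the class of `point` by majority vote of its k nearest
    training points (squared Euclidean distance)."""
    nearest = sorted(train, key=lambda t: sum((a - b) ** 2
                                              for a, b in zip(t[0], point)))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B")]
print(knn_predict(train, (0.5, 0.5)))  # A
```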
Predicts classes of new data from old data by drawing a boundary between two classes such that the margin around the boundary is made as large as possible without touching the points.
Computes a confusion matrix as well as model validation statistics.
Extracts the structure and content of a website and its pages.
Provides detailed statistics about a dataset.
Computes an estimate of a survival curve for truncated and/or censored data using the Kaplan-Meier or Fleming-Harrington method.
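The Kaplan-Meier estimate is the product S(t) = Π over tᵢ ≤ t of (1 − dᵢ/nᵢ), where dᵢ events occur at time tᵢ with nᵢ subjects still at risk. A pure-Python sketch of the right-censored case (the block itself presumably uses a statistics library):

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier estimator.  `events[i]` is True for an observed event,
    False for a right-censored observation.  Returns (time, survival) pairs."""
    n = len(durations)
    order = sorted(range(n), key=lambda i: durations[i])
    at_risk = n
    surv = 1.0
    curve = []
    i = 0
    while i < n:
        t = durations[order[i]]
        deaths = 0
        removed = 0
        while i < n and durations[order[i]] == t:
            deaths += events[order[i]]   # count events at this time
            removed += 1                 # events and censorings both leave the risk set
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= removed
    return curve

# Events at t=1 and t=2, censored observations at t=1.5 and t=3.
print(kaplan_meier([1, 1.5, 2, 3], [True, False, True, False]))
# [(1, 0.75), (2, 0.375)]
```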
A connector for MongoDB.
Reads multiple .rds files, either from an upstream block or from a folder, and appends them.
Joins regions defined in a shapefile with points defined as latitudes and longitudes, and gives meta information about the content of the shapefile