This describes a task, called 'hello', which has two parameters (`String pattern` and `File in`). A `task` definition is a way of **encapsulating a UNIX command and environment and presenting them as functions**. Tasks have both inputs and outputs. Inputs are declared at the top of the `task` definition, while outputs are defined in the `output` section.
The user must provide a value for these two parameters in order for this task to be runnable. Implementations of WDL should accept their [inputs as JSON format](#specifying-workflow-inputs-in-json). For example, the above task needs values for two parameters: `String pattern` and `File in`:
Types can also have a `$type_postfix_quantifier` (either `?` or `+`):
* `?` means that the value is optional. Any expressions that fail to evaluate because this value is missing will evaluate to the empty string.
* `+` can only be applied to `Array` types, and it signifies that the array is required to have one or more values in it.

For more details on the `$type_postfix_quantifier`, see the section on [Optional Parameters & Type Constraints](#optional-parameters--type-constraints).

For more information on types and how they are used to construct commands and define outputs of tasks, see the [Data Types & Serialization](#data-types--serialization) section.
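As an illustrative sketch (the task and parameter names here are invented, not from the spec's examples), both quantifiers might appear in declarations like this:

```wdl
task quantifier_demo {
  String? flag        # optional: the task is runnable even if no value is given
  Array[File]+ files  # required to contain at least one element

  command {
    python script.py ${"--flag=" + flag} ${sep=' ' files}
  }
}
```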
The following fully-qualified names would exist within `workflow wf` in main.wdl:

* `wf` - References the top-level workflow
* `wf.test` - References the first call to task `test`
* `wf.test2` - References the second call to task `test` (aliased as `test2`)
* `wf.test.my_var` - References the `String` input of the first call to task `test`
* `wf.test.results` - References the `File` output of the first call to task `test`
* `wf.test2.my_var` - References the `String` input of the second call to task `test`
* `wf.test2.results` - References the `File` output of the second call to task `test`
* `wf.foobar.results` - References the `File` output of the call to `other.foobar`
* `wf.foobar.input` - References the `File` input of the call to `other.foobar`
* `wf.arr` - References the `Array[String]` declaration on the workflow
* `wf.scattered_test` - References the scattered version of `call test`
* `wf.scattered_test.my_var` - References an `Array[String]`, where each element is used as `my_var` when running the scattered version of `call test`.
* `wf.scattered_test.results` - References an `Array[File]` containing the accumulated results from scattering `call test`
* `wf.scattered_test.1.results` - References a `File` from the second invocation (0-indexed) of `call test` within the scatter block. This particular invocation used the value "b" for `my_var`

A namespaced identifier has the same syntax as a fully-qualified name. It is interpreted as the left-hand side being the name of a namespace and then the right-hand side being the name of a workflow, task, or namespace within that namespace. Consider this workflow:
Here, `ns.ns2.task` is a namespaced identifier (see the [Call Statement](#call-statement) section for more details). Namespaced identifiers, like fully-qualified names, are left-associative, which means `ns.ns2.task` is interpreted as `((ns.ns2).task)`: `ns.ns2` must resolve to a namespace so that `.task` can be applied. If `ns2` were a task definition within `ns`, then this namespaced identifier would be invalid.
`strs` in this case would not be defined until both `call test as x` and `call test as y` have successfully completed. Before then, `strs` is undefined. If either of the two tasks fails, then evaluation of `strs` should return an error to indicate that the `call test2 as z` operation should be skipped.
The syntax `x.y` refers to member access. `x` must be an object or a task in a workflow. A task can be thought of as an object whose attributes are the outputs of the task.
The syntax `x[y]` is for indexing maps and arrays. If `x` is an array, then `y` must evaluate to an integer. If `x` is a map, then `y` must evaluate to a key in that map.
The import statement specifies a `$string` which is to be interpreted as a URI pointing to a WDL file. The engine is responsible for resolving the URI and downloading the contents. The contents of the document at each URI must be WDL source code.
If a namespace identifier (via the `as $identifier` syntax) is specified, then all the tasks and workflows imported will only be accessible through that [namespace](#namespaces). If no namespace identifier is specified, then all tasks and workflows from the URI are imported into the current namespace.
A task is a declarative construct with a focus on constructing a command from a template. The command specification is interpreted in an engine specific way, though a typical case is that a command is a UNIX command line which would be run in a Docker image.
Tasks also define their outputs, which is essential for building dependencies between tasks. Any other data specified in the task definition (e.g. runtime information and meta-data) is optional.
A command is a *task section* that starts with the keyword 'command' and is enclosed either in curly braces (`{`...`}`) or in `<<<`...`>>>`. The body of the command specifies the literal command line to run, with placeholders (`$command_part_var`) for the parts of the command line that need to be filled in.
The parser should read characters from the command line until it reaches a `${` character sequence; everything read up to that point is interpreted as a literal string (`$command_part_string`).
The parser should interpret any variable enclosed in `${`...`}` as a `$command_part_var`.
In this case `flags` within the `${`...`}` is an expression. The `$expression` can also be more complex, like a function call: `write_lines(some_array_value)`
>**NOTE**: the `$expression` in this context can only evaluate to a primitive type (e.g. not `Array`, `Map`, or `Object`). The only exception to this rule is when `sep` is specified as one of the `$var_option` fields
For example, `${true='--enable-foo', false='--disable-foo' Boolean yes_or_no}` would evaluate to either `--enable-foo` or `--disable-foo` based on the value of `yes_or_no`.
If either value is left out, then it's equivalent to specifying the empty string. If the parameter is `${true='--enable-foo' Boolean yes_or_no}` and a value of `false` is specified for this parameter, then the parameter will evaluate to the empty string.
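A minimal sketch of a task using this construct (the task name and surrounding command are illustrative):

```wdl
task toggle {
  command {
    python script.py ${true='--enable-foo', false='--disable-foo' Boolean yes_or_no}
  }
}
```

Calling this task with `yes_or_no` set to `true` would instantiate the command as `python script.py --enable-foo`; with `false`, as `python script.py --disable-foo`.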
> 2. If 'default' is specified, the `$type_postfix_quantifier` for the variable's type MUST be `?`
####Alternative heredoc syntax
Sometimes a command is long enough, or uses enough `{` characters, that a different set of delimiters makes it clearer. In this case, enclose the command in `<<<`...`>>>`, as follows:
Any text inside of the `command` section, after being instantiated, should have all *common leading whitespace* removed. In the `task heredoc` example in the previous section, if the user specifies a value of `/path/to/file` for `File in`, then the command should be:
The two spaces that were common to each line were removed.
If the user mixes tabs and spaces, the behavior is undefined. Implementations are encouraged to emit a warning (and perhaps assume a convention of 4 spaces per tab); other implementations might return an error in this case.
Then the task is expecting a file called "threshold.txt" in the current working directory where the task was executed. Inside of that file must be one line that contains only an integer and whitespace. See the [Data Types & Serialization](#data-types--serialization) section for more details.
The filename strings may also contain variable definitions themselves (see the [String Interpolation](#string-interpolation) section below for more details):
As with inputs, the outputs can reference previous outputs in the same block. The only requirement is that the output being referenced must be specified *before* the output which uses it.
Within tasks, any string literal can use string interpolation to access the value of any of the task's inputs. The most obvious example is defining an output file which is named as a function of its input. For example:
Any `${identifier}` inside of a string literal must be replaced with the value of the identifier. If `prefix` were specified as `foobar`, then `"${prefix}.out"` would evaluate to `"foobar.out"`.
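A sketch of a full task using this pattern (the task, file names, and command are illustrative):

```wdl
task interpolation_demo {
  String prefix
  File bam

  command {
    python analysis.py ${bam} > ${prefix}.out
  }
  output {
    File results = "${prefix}.out"
  }
}
```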
The runtime section defines key/value pairs for runtime information needed for this task. Individual backends will define which keys they will inspect so a key/value pair may or may not actually be honored depending on how the task is run.
Values can be any expression and it is up to the engine to reject keys and/or values that do not make sense in that context. For example, consider the following WDL:
The value for the `docker` runtime attribute in this case is an array of values. The parser should accept this. Some engines might interpret it as an "either this image or that image" or could reject it outright.
Since values are expressions, they can also reference variables in the task:

```wdl
task test {
  String ubuntu_version

  command {
    python script.py
  }
  runtime {
    docker: "ubuntu:" + ubuntu_version
  }
}
```

Most key/value pairs are arbitrary. However, the following keys have recommended conventions:
Location of a Docker image for which this task ought to be run. This can have a format like `ubuntu:latest` or `broadinstitute/scala-baseimage` in which case it should be interpreted as an image on DockerHub (i.e. it is valid to use in a `docker pull` command).
This purely optional section contains key/value pairs where the keys are names of parameters and the values are string descriptions for those parameters.
>*Additional requirement*: Any key in this section MUST correspond to a parameter in the command line
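For example, a task whose command references `${pattern}` and `${in_file}` might document them like this (assuming the same `key: value` syntax used by the `runtime` section; the descriptions are illustrative):

```wdl
parameter_meta {
  pattern: "the pattern to search for"
  in_file: "the input file to be searched"
}
```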
###Metadata Section
```
$meta = 'meta' $ws* '{' ($ws* $meta_kv $ws*)* '}'
$meta_kv = $identifier $ws* '=' $ws* $string
```
This purely optional section contains key/value pairs for any additional metadata that should be stored with the task, for example an author or contact email.
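For example, following the grammar above (the keys and values shown are illustrative):

```wdl
meta {
  author = "Joe Somebody"
  email = "joe@company.org"
}
```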
A notable piece of this example is `${sep=',' min_std_max_min+}`, which specifies that `min_std_max_min` can be one or more integers (the `+` after the variable name indicates one or more). If an `Array[Int]` is passed to this parameter, it is flattened by combining the elements with the separator character (`sep=','`).
In this example, it's all fairly boilerplate, declarative code, except for some language-like features, such as `firstline(stdout)` and `append(list_of_count, wc2-tool.count)`. Both could be implemented fairly easily if we allow custom function definitions. Parsing them is no problem, implementation would be fairly simple, and new functions would not be hard to add. Alternatively, this could be something like JavaScript or Python snippets that we run.
For this particular case, where the command line is *itself* a mini DSL, the best option is to allow the user to type in the rest of the command line, which is what `${sep=' ' stages+}` is for. This allows the user to specify an array of strings as the value for `stages`, which is then concatenated together with a space character.
A workflow may call other tasks/workflows via the `call` keyword. The `$namespaced_identifier` is the reference to which task to run. Most commonly, it's simply the name of a task (see examples below), but it can also use `.` as a namespace resolver.
See the section on [Fully Qualified Names & Namespaced Identifiers](#fully-qualified-names--namespaced-identifiers) for details about how the `$namespaced_identifier` ought to be interpreted.
All `call` statements must be uniquely identifiable. By default, the call's unique identifier is the task name (e.g. `call foo` would be referenced by name `foo`). However, if one were to `call foo` twice in a workflow, each subsequent `call` statement will need to alias itself to a unique name using the `as` clause: `call foo as bar`.
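For example, a sketch of a workflow that calls a task `foo` twice:

```wdl
workflow wf {
  call foo          # referenced by the name `foo`
  call foo as bar   # second call to the same task must be aliased; referenced as `bar`
}
```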
A `call` statement may reference a workflow too (e.g. `call other_workflow`). In this case, the `$inputs` section specifies a subset of the workflow's inputs and must specify fully qualified names.
The `$call_body` is optional and is meant to specify how to satisfy a subset of the task or workflow's input parameters, as well as a way to map task outputs to variables defined in the [visible scopes](#scope).
A `$variable_mapping` in the `$inputs` section maps parameters in the task to expressions. These expressions usually reference outputs of other tasks, but they can be arbitrary expressions.
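A sketch of such a mapping (the task and variable names are invented for illustration):

```wdl
call my_task {
  input: threshold=2, in_file=upstream_task.out
}
```

Here `threshold` is bound to a literal expression, while `in_file` references the `out` output of a hypothetical `upstream_task`.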
A "scatter" clause defines that everything in the body (`$scatter_body`) can be run in parallel. The clause in parentheses (`$scatter_iteration_statement`) declares which collection to scatter over and what to call each element.
The `$scatter_iteration_statement` has two parts: the "item" and the "collection". For example, `scatter(x in y)` would define `x` as the item, and `y` as the collection. The item is always an identifier, while the collection is an expression that MUST evaluate to an `Array` type. The item will represent each item in that expression. For example, if `y` evaluated to an `Array[String]` then `x` would be a `String`.
The `$scatter_body` defines a set of scopes that will execute in the context of this scatter block.
For example, if `$expression` is an array of integers of size 3, then the body of the scatter clause can be executed 3 times in parallel, and `$identifier` would refer to each integer in the array.
In this example, `task2` depends on `task1`. Variable `i` has an implicit `index` attribute to make sure we can access the right output from `task1`. Since both `task1` and `task2` run N times, where N is the length of the array `integers`, any scalar outputs of these tasks are now arrays.
Loops are distinct from scatter clauses because the body of a while loop needs to be executed to completion before another iteration is considered. The `$expression` condition is evaluated only when the iteration count is zero or after all `$workflow_element`s in the body have completed successfully for the current iteration.
Each `workflow` definition can specify an optional `output` section. This section lists outputs from individual `call`s that you also want to expose as outputs to the `workflow` itself. Replacing call output names with a `*` acts as a match-all wildcard.
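For example, a sketch of a workflow output section (the call names are illustrative):

```wdl
workflow wf {
  call my_task
  call other_task

  output {
    my_task.results
    other_task.*      # wildcard: expose every output of other_task
  }
}
```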
Import statements can be used to pull in tasks/workflows from other locations as well as create namespaces. In the simplest case, an import statement adds the tasks/workflows that are imported into the current namespace. For example:
tasks.wdl
```
task x {
  command { python script.py }
}
task y {
  command { python script2.py }
}
```

workflow.wdl
```
import "tasks.wdl"

workflow wf {
  call x
  call y
}
```

Tasks `x` and `y` are in the same namespace as workflow `wf`. However, workflow.wdl could instead put all of those tasks behind a namespace:

workflow.wdl
```
import "tasks.wdl" as ns

workflow wf {
  call ns.x
  call ns.y
}
```

Now everything inside of `tasks.wdl` must be accessed through the namespace `ns`.
Each namespace contains: namespaces, tasks, and workflows. The names of these need to be unique within that namespace. For example, there cannot be a task named `foo` and also a namespace named `foo`. Likewise, there can't be a task and a workflow with the same name, or two workflows with the same name.
Inside of any scope, variables may be [declared](#declarations). The variables declared in that scope are visible to any sub-scope, recursively. For example:
`my_task` will use `x=4` to set the value for `var` in its command line. However, `my_task` also needs a value for `x` which is defined at the task level. Since `my_task` has two inputs (`x` and `var`), and only one of those is set in the `call my_task` declaration, the value for `my_task.x` still needs to be provided by the user when the workflow is run.
[Types](#types) can be optionally suffixed with a `?` or `+` in certain cases.

* `?` means that the parameter is optional. A user does not need to specify a value for the parameter in order to satisfy all the inputs to the workflow.
* `+` applies only to `Array` types and represents a constraint that the `Array` value must contain one or more elements.
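For instance, a task along these lines (illustrative) declares an optional `val` and uses it in its command:

```wdl
task test {
  String? val

  command {
    python script.py --val=${val}
  }
}
```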
Since `val` is optional, this command line can be instantiated in two ways:

```
python script.py --val=foobar
```

Or

```
python script.py --val=
```

The latter case is very likely an error case, and this `--val=` part should be left off if a value for `val` is omitted. To solve this problem, modify the expression inside the template tag as follows:

```
python script.py ${"--val=" + val}
```

#Scatter / Gather
The `scatter` block is meant to parallelize a series of identical tasks but give them slightly different inputs. The simplest example is:
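The task and workflow behind this example are not shown in this excerpt; a sketch consistent with the output described below (the exact command is illustrative; it increments each input) would be:

```wdl
task inc {
  Int i

  command <<<
    python -c "print(${i} + 1)"
  >>>

  output {
    Int incremented = read_int(stdout())
  }
}

workflow wf {
  Array[Int] integers = [1, 2, 3, 4, 5]

  scatter (i in integers) {
    call inc {
      input: i=i
    }
  }
}
```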
Running this workflow (which needs no inputs), would yield a value of `[2,3,4,5,6]` for `wf.inc`. While `task inc` itself returns an `Int`, when it is called inside a scatter block, that type becomes an `Array[Int]`.
Any task that's downstream from the call to `inc` and outside the scatter block must accept an `Array[Int]`:
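A sketch of such a downstream task, consistent with the `wf.sum.sum` output described below (the command is illustrative):

```wdl
task sum {
  Array[Int] ints

  command <<<
    python -c "print(${sep='+' ints})"
  >>>

  output {
    Int sum = read_int(stdout())
  }
}
```

With the scattered `inc` output `[2,3,4,5,6]`, the instantiated command becomes `python -c "print(2+3+4+5+6)"`.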
This workflow will output a value of `20` for `wf.sum.sum`. This works because `call inc` outputs an `Array[Int]`, since it is inside the scatter block.
However, from inside the scope of the scatter block, the output of `call inc` is still an `Int`. So the following is valid:
In this example, `inc` and `inc2` are called in serial, where the output of one is fed to the other. `inc2` would output the array `[3,4,5,6,7]`.
#Variable Resolution
Inside of [expressions](#expressions), variables are resolved differently depending on whether the expression is in a `task` declaration or a `workflow` declaration.
##Task-Level Resolution
Inside a task, resolution is trivial: the variable referenced MUST be a [declaration](#declarations) of the task. For example:
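A sketch of such a task, whose only expression is `write_lines(strings)` (the command itself is illustrative):

```wdl
task my_task {
  Array[String] strings

  command {
    cat ${write_lines(strings)}
  }
}
```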
Inside of this task, there exists only one expression: `write_lines(strings)`. When the expression evaluator tries to resolve `strings`, it must find a declaration of the task by that name (in this case, it does).
##Workflow-Level Resolution
In a workflow, resolution works by traversing the scope hierarchy starting from the expression that references the variable.
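The workflow discussed below is not shown in this excerpt; a hypothetical reconstruction consistent with it (assuming declarations inside a `call` body shadow workflow-level declarations) might look like:

```wdl
workflow wf {
  String t = "t"

  call my_task {
    String s = "my_task_s"
    input: in0=s+"-suffix", in1=t+"-suffix"
  }
}
```

Resolving `s` finds the call-level declaration (`"my_task_s"`); resolving `t` traverses up to the workflow-level declaration (`"t"`).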
In this example, there are two expressions: `s+"-suffix"` and `t+"-suffix"`. `s` is resolved as `"my_task_s"` and `t` is resolved as `"t"`.
#Computing Inputs
Both tasks and workflows have typed inputs that must be satisfied in order to run. The following sections describe how to compute inputs for `task` and `workflow` declarations.
##Task Inputs
Tasks define all their inputs as declarations at the top of the task definition.
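The task discussed below is along these lines (the command itself is illustrative):

```wdl
task test {
  String s
  Int i
  Float f

  command {
    python script.py ${i} ${f}
  }
}
```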
In this example, `s`, `i`, and `f` are inputs to this task, even though the command line does not reference `${s}`. Implementations of WDL engines may display a warning or report an error in this case, since `s` isn't used.
##Workflow Inputs
Workflows have declarations, like tasks, but a workflow must also account for all calls to sub-tasks when determining inputs.
Workflows also return their inputs as fully qualified names. Tasks only return the names of the variables as inputs (as they're guaranteed to be unique within a task). However, since workflows can call the same task twice, names might collide. The general algorithm for computing inputs goes something like this:
* Take all inputs to all `call` statements in the workflow
* Subtract out all inputs that are satisfied through the `input:` section
* Add in all declarations which don't have a static value defined
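A hypothetical workflow consistent with the JSON inputs shown in the next section:

```wdl
task t1 {
  String s

  command {
    ./script ${s}
  }
}

workflow wf {
  Int int_val
  Array[Int] my_ints
  File ref_file

  call t1
  call t1 as t2
}
```

Its computed inputs would be `wf.t1.s`, `wf.t2.s`, `wf.int_val`, `wf.my_ints`, and `wf.ref_file`.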
Once workflow inputs are computed (see previous section), the value for each of the fully-qualified names needs to be specified per invocation of the workflow. Workflow inputs are specified in JSON or YAML format. In JSON, the inputs to the workflow in the previous section can be:
```
{
  "wf.t1.s": "some_string",
  "wf.t2.s": "some_string",
  "wf.int_val": 3,
  "wf.my_ints": [5,6,7,8],
  "wf.ref_file": "/path/to/file.txt"
}
```

It's important to note that the type in JSON must be coercible to the WDL type. For example, `wf.int_val` expects an integer, but if we specified it in JSON as `"wf.int_val": "3"`, this coercion from string to integer is not valid and would result in a type error. See the section on [Type Coercion](#type-coercion) for more details.
WDL values can be created from either JSON values or from native language values. The table below uses String-like, Integer-like, etc. to refer to values in a particular programming language. For example, "String-like" could mean a `java.lang.String` in the Java context or a `str` in Python. An "Array-like" could refer to a `Seq` in Scala or a `list` in Python.

|WDL Type |Can Accept |Notes / Constraints|
|---------|-------------|-------------------|
|`String` |JSON String||
| |String-like||
| |`String`|Identity coercion|
| |`File`||
|`File` |JSON String|Interpreted as a file path|
| |String-like|Interpreted as a file path|
| |`String`|Interpreted as a file path|
| |`File`|Identity coercion|
|`Int` |JSON Number|Use floor of the value for non-integers|
| |Integer-like||
| |`Int`|Identity coercion|
|`Float` |JSON Number||
| |Float-like||
| |`Float`|Identity coercion|
|`Boolean`|JSON Boolean||
| |Boolean-like||
| |`Boolean`|Identity coercion|
|`Array[T]`|JSON Array|Elements must be coercible to `T`|
| |Array-like|Elements must be coercible to `T`|
|`Map[K, V]`|JSON Object|Keys and values must be coercible to `K` and `V`, respectively|
| |Map-like|Keys and values must be coercible to `K` and `V`, respectively|
Given a file-like object (`String`, `File`) as a parameter, this will read each line as a string and return an `Array[String]` representation of the lines in the file.
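For example, a task sketch that captures its standard output as lines (the command is illustrative):

```wdl
task do_stuff {
  File file

  command {
    cat ${file}
  }

  output {
    Array[String] lines = read_lines(stdout())
  }
}
```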
The `read_tsv()` function takes one parameter, which is a file-like object (`String`, `File`), and returns an `Array[Array[String]]` representing the table from the TSV file.
Given a file-like object (`String`, `File`) as a parameter, this will read each line from a file and expect the line to have the format `col1\tcol2`. In other words, the file-like object must be a two-column TSV file.
The `read_json()` function takes one parameter, which is a file-like object (`String`, `File`), and returns a data type which matches the data structure in the JSON file. The mapping of JSON type to WDL type is:
The `read_float()` function takes a file path which is expected to contain 1 line with 1 floating point number on it. This function returns that float.
The `read_boolean()` function takes a file path which is expected to contain 1 line with 1 Boolean value (either "true" or "false" on it). This function returns that Boolean value.
##File write_lines(Array[String])
Given something that's compatible with `Array[String]`, this writes each element to its own line in a file, with newline (`\n`) characters as line separators.
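For example (the task name and command are illustrative):

```wdl
task count {
  Array[String] strs

  command {
    wc -l ${write_lines(strs)}
  }
}
```

The engine writes the elements of `strs` to a file, one per line, and substitutes that file's path for the `${...}` tag.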
Tasks and workflows are given values for their input parameters in order to run. The types of those input parameters are declared on the `task` or `workflow`, and can be any [valid type](#types):
When a WDL workflow engine instantiates a command specified in the `command` section of a `task`, it must serialize all `${...}` tags in the command into primitive types.
On the other end, tasks need to be able to communicate data structures back to the workflow engine. For example, let's say this same tool that takes a list of FASTQs wants to return back a `Map[File, Int]` representing the number of reads in each FASTQ. A tool might choose to output it as a two-column TSV or as a JSON object and WDL needs to know how to convert that to the proper data type.
Here, the expression `read_lines(stdout())` says "take the output from stdout, break into lines, and return that result as an Array[String]". See the definition of [read_lines](#arraystring-read_linesstringfile) and [stdout](#file-stdout) for more details.
Compound types, like `Array` and `Map`, must be converted to primitive types before they can be used in the command. There are many ways to turn a compound type into primitive types, as laid out in the following sections.
The array flattening approach can be done if a parameter is specified as `${sep=' ' my_param}`. `my_param` must be declared as an `Array` of primitive types. When the value of `my_param` is specified, then the values are joined together with the separator character (a space in this case). For example:
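A sketch of this (the task and parameter names are illustrative):

```wdl
task grep_files {
  String pattern
  Array[File] files

  command {
    grep '${pattern}' ${sep=' ' files}
  }
}
```

If `files` were `["a.txt", "b.txt"]`, the file paths would be joined with spaces on the instantiated command line.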
The map type can be serialized as a two-column TSV file and the parameter on the command line is given the path to that file, using the `write_map()` function:
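A sketch (the task name and command are illustrative):

```wdl
task map_via_tsv {
  Map[String, String] m

  command {
    ./script --map=${write_map(m)}
  }
}
```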
The map type can also be serialized as a JSON file and the parameter on the command line is given the path to that file, using the `write_json()` function:
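A sketch (the task name and command are illustrative):

```wdl
task map_via_json {
  Map[String, String] m

  command {
    ./script --map=${write_json(m)}
  }
}
```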
An object is a more general case of a map where the keys are strings and the values are of arbitrary types and treated as strings. Objects can be serialized with either `write_object()` or `write_json()` functions:
`Array[Object]` must guarantee that all objects in the array have the same set of attributes. These can be serialized with either `write_objects()` or `write_json()` functions, as described in following sections.
#####Array[Object] serialization using write_objects()
An `Array[Object]` can be serialized using `write_objects()` into a TSV file:
Both files `file_with_int` and `file_with_uri` should contain one line with the value on that line. This value is then validated against the type of the variable. If `file_with_int` contains a line with the text "foobar", the workflow must fail this task with an error.
###Compound Types
Tasks can also output an `Array`, `Map`, or `Object` data structure to a file or stdout/stderr in two major formats:

* JSON - because it fits naturally with the types within WDL
* Text based / TSV - These are usually simple table and text-based encodings (e.g. `Array[String]` could be serialized by having each element be a line in a file)
####Array deserialization
Arrays are deserialized from:

* Files that contain a JSON Array as their top-level element.
* Any file where it is desirable to interpret each line as an element of the `Array`.

#####Array deserialization using read_lines()
`read_lines()` will return an `Array[String]` where each element in the array is a line in the file.
This return value can be auto converted to other `Array` types. For example:
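The example discussed below is not shown in this excerpt; a sketch consistent with it:

```wdl
task test {
  command <<<
    echo "foo"
    echo "bar"
  >>>

  output {
    Array[String] my_array = read_lines(stdout())
  }
}
```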
This task would assign the array with elements `"foo"` and `"bar"` to `my_array`.
If the echo statement was instead `echo '{"foo": "bar"}'`, the engine MUST fail the task for a type mismatch.
####Map deserialization
Maps are deserialized from:

* Files that contain a JSON Object as their top-level element.
* Files formatted as two-column TSV.

#####Map deserialization using read_map()
`read_map()` will return a `Map[String, String]` where the keys are the first column in the TSV input file and the corresponding values are the second column.
This return value can be auto converted to other `Map` types. For example:
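The example discussed below is not shown in this excerpt; a sketch consistent with it:

```wdl
task test {
  command <<<
    python <<CODE
    for i in range(3):
      print("key_{}\t{}".format(i, i))
    CODE
  >>>

  output {
    Map[String, Int] my_ints = read_map(stdout())
  }
}
```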
This would put a map containing three keys (`key_0`, `key_1`, and `key_2`) and three respective values (`0`, `1`, and `2`) as the value of `my_ints`
#####Map deserialization using read_json()
`read_json()` will return whatever data type resides in that JSON file. If that file contains a JSON object with homogeneous key/value pair types (e.g. `string -> int` pairs), then the `read_json()` function would return a `Map`.
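The example discussed below is not shown in this excerpt; a sketch consistent with it:

```wdl
task test {
  command <<<
    echo '{"foo": "bar"}'
  >>>

  output {
    Map[String, String] my_map = read_json(stdout())
  }
}
```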
This task would assign the one key-value pair map in the echo statement to `my_map`.
If the echo statement was instead `echo '["foo", "bar"]'`, the engine MUST fail the task for a type mismatch.
####Object deserialization
Objects are deserialized from files that contain a two-row, n-column TSV file. The first row contains the object attribute names, and the corresponding entries on the second row are the values.
#####Object deserialization using read_object()
`read_object()` will return an `Object` where the keys are the first row in the TSV input file and the corresponding values are the second row (corresponding column).
    print('\t'.join(["key_{}".format(i) for i in range(3)]))
    print('\t'.join(["value_{}".format(i) for i in range(3)]))
    CODE
  >>>

  output {
    Object my_obj = read_object(stdout())
  }
}
```

This would put an object containing three attributes (`key_0`, `key_1`, and `key_2`) and three respective values (`value_0`, `value_1`, and `value_2`) as the value of `my_obj`.
####Array[Object] deserialization
`Array[Object]` deserialization MUST assume that all objects in the array are homogeneous (they have the same attributes, but the attributes don't have to have the same values).
An `Array[Object]` is deserialized from files that contain a uniform n-column TSV file with at least two rows. The first row contains the object attribute names, and the corresponding entries on subsequent rows are the values.
`read_objects()` will return an `Array[Object]` where the keys are taken from the first row in the TSV input file and each subsequent row provides the values for one object.
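The example discussed below is not shown in this excerpt; a sketch consistent with the "three identical objects" result it describes:

```wdl
task test {
  command <<<
    python <<CODE
    print('\t'.join(["key_{}".format(i) for i in range(3)]))
    print('\t'.join(["value_{}".format(i) for i in range(3)]))
    print('\t'.join(["value_{}".format(i) for i in range(3)]))
    print('\t'.join(["value_{}".format(i) for i in range(3)]))
    CODE
  >>>

  output {
    Array[Object] my_obj = read_objects(stdout())
  }
}
```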
This would create an array of **three identical** `Object`s, each containing three attributes (`key_0`, `key_1`, and `key_2`) and three respective values (`value_0`, `value_1`, and `value_2`), as the value of `my_obj`.