This repository has been archived by the owner on Mar 30, 2022. It is now read-only.

Commit

Merge pull request #28 from annjose/fix-urls
Fix URLs of Swift files that were pointing to the forked repo
dan-zheng committed May 20, 2018
2 parents cee18ca + 30b91e4 commit 9f12351
Showing 5 changed files with 18 additions and 18 deletions.
Installation.md: 2 changes (1 addition & 1 deletion)
@@ -2,7 +2,7 @@

To install Swift for TensorFlow, download one of the packages below and follow the instructions for your operating system. After installation, you can use the full suite of Swift tools, including `swift` (Swift REPL/interpreter) and `swiftc` (Swift compiler). See [here](Usage.md) for more details about using Swift for TensorFlow.

**Note:** If you want to modify the Swift for TensorFlow source code or build with a custom version of TensorFlow, see [here](https://github.com/google/swift/blob/tensorflow/README.md) for instructions on building from source.
**Note:** If you want to modify the Swift for TensorFlow source code or build with a custom version of TensorFlow, see [here](https://github.com/apple/swift/blob/tensorflow/README.md) for instructions on building from source.

**Note:** Swift for TensorFlow is an early stage research project. It has been released to enable open source development and is not yet ready for general use by machine learning developers.

Usage.md: 2 changes (1 addition & 1 deletion)
@@ -5,7 +5,7 @@ This document explains basic usage of Swift for TensorFlow, including:
* How to use the Swift interpreter and compiler
* How to use Swift for TensorFlow with Xcode (Mac only)

You must have a working toolchain for Swift for TensorFlow (`swift`, `swiftc`, etc) before proceeding with these instructions. If not, please [install Swift for TensorFlow](Installation.md) or [build from source](https://github.com/google/swift/blob/tensorflow/README.md) before proceeding.
You must have a working toolchain for Swift for TensorFlow (`swift`, `swiftc`, etc) before proceeding with these instructions. If not, please [install Swift for TensorFlow](Installation.md) or [build from source](https://github.com/apple/swift/blob/tensorflow/README.md) before proceeding.

To see example models written using Swift for TensorFlow, go to [tensorflow/swift-models](https://github.com/tensorflow/swift-models).
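For orientation, the kind of program these instructions are about can be as small as the sketch below. This is a hedged example, not taken from the diff: it assumes a Swift for TensorFlow toolchain from this era and borrows the `Tensor(matrix)` initializer style shown later in DesignOverview.md.

```swift
// Build with `swiftc` or run in the `swift` REPL from a Swift for TensorFlow toolchain.
import TensorFlow

let matrix: [[Float]] = [[1, 2], [3, 4]]
let tensor = Tensor(matrix)    // 2-D tensor created from a nested Swift array
let doubled = tensor + tensor  // `+` dispatches to the TensorFlow `Add` op
print(doubled)
```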

docs/AutomaticDifferentiation.md: 2 changes (1 addition & 1 deletion)
@@ -403,7 +403,7 @@ Automatic differentiation in Swift is a compiler transform implemented as a
static analysis. AD benefits from being implemented on a functional IR like SSA
form, so our implementation is a transformation on the Swift Intermediate
Language. [The differentiation
pass](https://github.com/google/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFDifferentiation.cpp)
pass](https://github.com/apple/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFDifferentiation.cpp)
is part of the mandatory lowering pass pipeline, and is run before [Graph
Program Extraction](GraphProgramExtraction.md).

docs/DesignOverview.md: 28 changes (14 additions & 14 deletions)
@@ -43,7 +43,7 @@ To understand how this works, it is important to know how TensorFlow represents

Swift for TensorFlow has a low-level syntax that gives you direct access to any op, using a distinct `#tfop` syntax (this syntax is a placeholder that is likely to be revised).
For example, here are a few methods defined on the Tensor type (simplified slightly for presentation);
you can see their full definition in [Ops.swift](https://github.com/google/swift/blob/tensorflow/stdlib/public/TensorFlow/Ops.swift).
you can see their full definition in [Ops.swift](https://github.com/apple/swift/blob/tensorflow/stdlib/public/TensorFlow/Ops.swift).

```swift
struct Tensor<Scalar> {
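  // Hedged sketch, not the verbatim contents of Ops.swift: a method built on
  // the `#tfop` syntax described above might look roughly like this.
  static func + (lhs: Tensor, rhs: Tensor) -> Tensor {
    return #tfop("Add", lhs, rhs)  // direct access to the TensorFlow `Add` op
  }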
@@ -82,19 +82,19 @@ etc) that connect tensor operations through a process called "deabstraction".
After deabstraction, the tensor operations are directly connected to each other
through SSA dataflow edges and are embedded in a control flow graph represented
in the [Swift Intermediate Language](https://github.com/apple/swift/blob/master/docs/SIL.rst) (SIL).
The code for this is primarily implemented in [TFDeabstraction.cpp](https://github.com/google/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFDeabstraction.cpp).
The code for this is primarily implemented in [TFDeabstraction.cpp](https://github.com/apple/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFDeabstraction.cpp).

Once the tensor operations are desugared, a transformation we call "partitioning" extracts the graph operations from the program and builds a new SIL function to represent the tensor code. In addition to removing the tensor operations from the host code, new calls are injected that call into [our new runtime library](#runtime-entry-points-for-extraction) to start up TensorFlow, rendezvous to collect any results, and send/receive values between the host and the tensor program as it runs. The bulk of the Graph Program Extraction transformation itself lives in [TFPartition.cpp](https://github.com/google/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFPartition.cpp).
Once the tensor operations are desugared, a transformation we call "partitioning" extracts the graph operations from the program and builds a new SIL function to represent the tensor code. In addition to removing the tensor operations from the host code, new calls are injected that call into [our new runtime library](#runtime-entry-points-for-extraction) to start up TensorFlow, rendezvous to collect any results, and send/receive values between the host and the tensor program as it runs. The bulk of the Graph Program Extraction transformation itself lives in [TFPartition.cpp](https://github.com/apple/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFPartition.cpp).

Once the tensor function is formed, it has some transformations applied to it, and is eventually emitted to a TensorFlow graph using the code in [TFLowerGraph.cpp](https://github.com/google/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFLowerGraph.cpp). After the TensorFlow graph is formed, we serialize it to a protobuf and encode the bits directly into the executable, making it easy to load at program runtime.
Once the tensor function is formed, it has some transformations applied to it, and is eventually emitted to a TensorFlow graph using the code in [TFLowerGraph.cpp](https://github.com/apple/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFLowerGraph.cpp). After the TensorFlow graph is formed, we serialize it to a protobuf and encode the bits directly into the executable, making it easy to load at program runtime.

We aren’t aware of any other system using this approach, but our implementation draws on a lot of related conceptual work, including [program slicing](https://en.wikipedia.org/wiki/Program_slicing) and [abstract interpretation](https://en.wikipedia.org/wiki/Abstract_interpretation), and is implemented as a [static compiler analysis](https://en.wikipedia.org/wiki/Static_program_analysis). Please see our detailed [Graph Program Extraction whitepaper](GraphProgramExtraction.md) for more information on how all of this works.

Finally, while TensorFlow is the reason we built this infrastructure, its algorithms are independent of TensorFlow itself: the same compiler transformation can extract any computation that executes asynchronously from the host program while communicating through sends and receives. This is useful and can be applied to anything that represents computation as a graph, including other ML frameworks, other kinds of accelerators (for cryptography, graphics, transcoding, etc), and general distributed systems programming models based on graph abstractions. We are interested in exploring new applications of this algorithm in the future.

## The TensorFlow module

The TensorFlow module is the library of code you get as a result of `import TensorFlow` in a Swift program. It is written in Swift and lives in the [stdlib/public/TensorFlow](https://github.com/google/swift/tree/tensorflow/stdlib/public/TensorFlow) directory. It implements a few different things:
The TensorFlow module is the library of code you get as a result of `import TensorFlow` in a Swift program. It is written in Swift and lives in the [stdlib/public/TensorFlow](https://github.com/apple/swift/tree/tensorflow/stdlib/public/TensorFlow) directory. It implements a few different things:

### User APIs: Tensor, ShapedArray, etc.

@@ -158,18 +158,18 @@ let tensor2D = Tensor(matrix)
```

The implementation of `Tensor` builds on the `#tfop` magic syntax that builds TensorFlow graph nodes, and is defined in
[Tensor.swift](https://github.com/google/swift/blob/tensorflow/stdlib/public/TensorFlow/Tensor.swift),
[Ops.swift](https://github.com/google/swift/blob/tensorflow/stdlib/public/TensorFlow/Ops.swift),
[RankedTensor.swift.gyb](https://github.com/google/swift/blob/tensorflow/stdlib/public/TensorFlow/RankedTensor.swift.gyb),
and [TensorProtocol.swift](https://github.com/google/swift/blob/tensorflow/stdlib/public/TensorFlow/TensorProtocol.swift).
[Tensor.swift](https://github.com/apple/swift/blob/tensorflow/stdlib/public/TensorFlow/Tensor.swift),
[Ops.swift](https://github.com/apple/swift/blob/tensorflow/stdlib/public/TensorFlow/Ops.swift),
[RankedTensor.swift.gyb](https://github.com/apple/swift/blob/tensorflow/stdlib/public/TensorFlow/RankedTensor.swift.gyb),
and [TensorProtocol.swift](https://github.com/apple/swift/blob/tensorflow/stdlib/public/TensorFlow/TensorProtocol.swift).
The implementation of `ShapedArray` follows standard techniques used when implementing Swift collections and is defined primarily in
[ShapedArray.swift](https://github.com/google/swift/blob/tensorflow/stdlib/public/TensorFlow/ShapedArray.swift) and
[RankedArray.swift.gyb](https://github.com/google/swift/blob/tensorflow/stdlib/public/TensorFlow/RankedArray.swift.gyb).
[ShapedArray.swift](https://github.com/apple/swift/blob/tensorflow/stdlib/public/TensorFlow/ShapedArray.swift) and
[RankedArray.swift.gyb](https://github.com/apple/swift/blob/tensorflow/stdlib/public/TensorFlow/RankedArray.swift.gyb).
In addition to the `Tensor` family of types, we are experimenting with building abstractions on top of the TensorFlow graph nodes for data pipelines, resources, variants, and other things representable as graph nodes.
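As a quick illustration of the user-facing side of these types, here is a hedged sketch; the initializer labels are approximations for this era of the project rather than something taken from this diff.

```swift
import TensorFlow

// Hedged sketch: initializer labels are approximate for this era of the project.
let array = ShapedArray<Float>(shape: [2, 3], scalars: [0, 1, 2, 3, 4, 5])  // host-side n-d array
let tensor = Tensor(array)                                                  // graph-backed tensor
print(array.shape)      // [2, 3]
print(tensor + tensor)  // element-wise add via the TensorFlow `Add` op
```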

### Runtime Entry Points for Extraction

The [Graph Program Extraction algorithm](#graph-program-extraction) splits the tensor operations out to a TensorFlow graph which is serialized to a protobuf and encoded into the program’s executable. It rewrites the host code to insert calls to "start tensor program", "finish tensor program", and "terminate tensor program" runtime entry points, which are implemented in the [CompilerRuntime.swift](https://github.com/google/swift/blob/tensorflow/stdlib/public/TensorFlow/CompilerRuntime.swift) file in terms of TensorFlow APIs.
The [Graph Program Extraction algorithm](#graph-program-extraction) splits the tensor operations out to a TensorFlow graph which is serialized to a protobuf and encoded into the program’s executable. It rewrites the host code to insert calls to "start tensor program", "finish tensor program", and "terminate tensor program" runtime entry points, which are implemented in the [CompilerRuntime.swift](https://github.com/apple/swift/blob/tensorflow/stdlib/public/TensorFlow/CompilerRuntime.swift) file in terms of TensorFlow APIs.
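Conceptually, the host code the compiler produces ends up shaped like the sketch below. Every name here is an illustrative placeholder; the real entry points live in CompilerRuntime.swift and look different.

```swift
// Illustrative placeholders only; the real entry points are in CompilerRuntime.swift.
struct TensorProgram {}  // stands in for a handle to a running TensorFlow computation

func startTensorProgram(_ graphProto: [UInt8], inputs: [Float]) -> TensorProgram {
  // In the real runtime this hands the serialized graph to TensorFlow and starts it.
  return TensorProgram()
}

func finishTensorProgram(_ program: TensorProgram) -> [Float] {
  // In the real runtime this rendezvouses with the tensor program and collects results.
  return []
}

let embeddedGraphProto: [UInt8] = []  // the protobuf encoded into the executable

func hostSideOfModel(_ input: Float) -> Float {
  // "start tensor program": kick off the extracted graph with the host's inputs.
  let program = startTensorProgram(embeddedGraphProto, inputs: [input])
  // ...host-only code keeps running here, exchanging values via sends/receives...
  // "finish tensor program": wait for the graph and collect its results.
  let results = finishTensorProgram(program)
  return results.first ?? 0
}
```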

Our runtime currently has several supported paths for driving TensorFlow, including paths that enable XLA, paths that go through the classic executor, paths that use the "eager execution" runtime entry points, and some specialized support for Cloud TPU configurations. This is still rapidly evolving and subject to continuous change.

@@ -241,9 +241,9 @@ print(images.shape) // (50000, 784)

As you can see, the syntax here is very close: the major differences are that Swift requires values to be declared before use, and that we decided to put [Python builtin functions](https://docs.python.org/3/library/functions.html) like `import`, `type`, `slice`, etc under a `Python.` namespace (to avoid cluttering the global scope). This doesn’t require SWIG or any other wrappers, so it is super easy to use.

This feature is accomplished without making Python specific changes to the compiler or language - it is completely implemented in the [Python.swift file](https://github.com/google/swift/blob/tensorflow/stdlib/public/Python/Python.swift). This means that we can use the same techniques to directly integrate with other dynamic language runtimes (e.g. Javascript, Ruby, etc) if it becomes important in the future. Python support is also completely independent of the other TensorFlow and automatic differentiation logic we’re building in the rest of the project. It is a generally useful extension to the Swift ecosystem that can stand alone, useful for server side development or anything else that wants to interoperate with existing Python APIs.
This feature is accomplished without making Python specific changes to the compiler or language - it is completely implemented in the [Python.swift file](https://github.com/apple/swift/blob/tensorflow/stdlib/public/Python/Python.swift). This means that we can use the same techniques to directly integrate with other dynamic language runtimes (e.g. Javascript, Ruby, etc) if it becomes important in the future. Python support is also completely independent of the other TensorFlow and automatic differentiation logic we’re building in the rest of the project. It is a generally useful extension to the Swift ecosystem that can stand alone, useful for server side development or anything else that wants to interoperate with existing Python APIs.

To find out more about how this works, please check out the [Python Interoperability Deep Dive](PythonInteroperability.md), or browse the implementation in [Python.swift on GitHub](https://github.com/google/swift/blob/tensorflow/stdlib/public/Python/Python.swift).
To find out more about how this works, please check out the [Python Interoperability Deep Dive](PythonInteroperability.md), or browse the implementation in [Python.swift on GitHub](https://github.com/apple/swift/blob/tensorflow/stdlib/public/Python/Python.swift).
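For a concrete feel, here is a hedged sketch of the interoperability in use; the member-lookup and call syntax shown are approximate for this stage of the project.

```swift
import Python   // the Python interoperability module described above

// Hedged sketch: syntax is approximate for this era of the project.
let np = Python.import("numpy")    // Python builtins live under the `Python.` namespace
let x = np.array([1.0, 2.0, 3.0])  // dynamic member lookup, then a call into NumPy
print(Python.type(x))              // <class 'numpy.ndarray'>
print(x * 2)                       // [2. 4. 6.]
```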

## Future Directions

docs/PythonInteroperability.md: 2 changes (1 addition & 1 deletion)
@@ -248,7 +248,7 @@ And of course, this integrates with all the normal mechanics provided by Swift e

## Current Implementation and Status

As mentioned above, our current implementation of the Python interoperability library is available on GitHub in the [Python.swift](https://github.com/google/swift/blob/tensorflow/stdlib/public/Python/Python.swift) file.
As mentioned above, our current implementation of the Python interoperability library is available on GitHub in the [Python.swift](https://github.com/apple/swift/blob/tensorflow/stdlib/public/Python/Python.swift) file.
In practice, we have found that it works nicely for many use cases. However, a few things are still missing, and we need to continue developing and figuring them out:

We need to implement support for the [@dynamicCallable feature](https://gist.github.com/lattner/a6257f425f55fe39fd6ac7a2354d693d), improving the call-side syntax, just like we improved member lookup.
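For readers unfamiliar with the proposal, the sketch below shows the general shape the attribute eventually took in Swift; treat it as an assumption relative to the gist linked above rather than this project's actual implementation.

```swift
// Sketch of the call-side sugar @dynamicCallable enables (shape of the eventual
// Swift feature; not this project's Python bridging code).
@dynamicCallable
struct Callable {
  func dynamicallyCall(withKeywordArguments args: KeyValuePairs<String, Int>) -> Int {
    // A real bridge would forward `args` to the Python C API; here we just sum them.
    return args.reduce(0) { $0 + $1.value }
  }
}

let f = Callable()
let result = f(1, 2, extra: 3)  // sugar for f.dynamicallyCall(withKeywordArguments: ["": 1, "": 2, "extra": 3])
print(result)                   // 6
```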
