Merge main v4 (#5929)
* Merge main v4

* js conflicts
freddyaboulton committed Oct 16, 2023
1 parent 0cec4f8 · commit 328df87
Showing 261 changed files with 439 additions and 427 deletions.
5 changes: 5 additions & 0 deletions .changeset/all-crabs-doubt.md
@@ -0,0 +1,5 @@
---
"@gradio/wasm": patch
---

fix:Lite: Add a break statement
6 changes: 6 additions & 0 deletions .changeset/angry-states-battle.md
@@ -0,0 +1,6 @@
---
"@gradio/client": patch
"gradio": patch
---

fix:Ensure websocket polyfill doesn't load if there is already a `global.WebSocket` property set
5 changes: 5 additions & 0 deletions .changeset/every-eggs-arrive.md
@@ -0,0 +1,5 @@
---
"gradio": minor
---

feat:Added dimensionality check to avoid bad array dimensions
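
As a rough illustration of what such a dimensionality check can look like, here is a minimal NumPy sketch; the function name and the accepted shapes are assumptions for illustration, not gradio's actual implementation:

```python
import numpy as np

def check_image_dims(arr: np.ndarray) -> np.ndarray:
    # Hypothetical helper: accept 2D (grayscale) or 3D (H x W x channels)
    # arrays and fail early with a clear message otherwise, instead of
    # letting a bad shape surface as an obscure error downstream.
    if arr.ndim not in (2, 3):
        raise ValueError(f"Expected a 2D or 3D array, got {arr.ndim} dimensions")
    return arr
```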
5 changes: 5 additions & 0 deletions .changeset/green-forks-float.md
@@ -0,0 +1,5 @@
---
"gradio": patch
---

fix:Define Font.__repr__() so that it prints in the docs in a readable format
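
Without a custom `__repr__`, a font renders in generated docs as something like `<gradio.themes.utils.fonts.Font object at 0x...>`. A minimal sketch of the idea, assuming a `Font` class whose key attribute is `name` (the real class carries more fields):

```python
class Font:
    # Simplified stand-in for gradio's Font class.
    def __init__(self, name: str):
        self.name = name

    def __repr__(self) -> str:
        # Render as the constructor call so docs stay readable.
        return f"Font({self.name!r})"
```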
5 changes: 5 additions & 0 deletions .changeset/new-ideas-sniff.md
@@ -0,0 +1,5 @@
---
"gradio": minor
---

feat:Fix curly brackets in docstrings
5 changes: 5 additions & 0 deletions .changeset/tall-tables-sing.md
@@ -0,0 +1,5 @@
---
"gradio": patch
---

fix:Remove deprecation warning from `gr.update` and clean up associated code
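
In practice this means event handlers can use either update style without a warning. A short sketch of the two equivalent forms (the component and values are chosen for illustration):

```python
import gradio as gr

def hide_old_style():
    # Established style: return an update dictionary via gr.update.
    return gr.update(visible=False)

def hide_new_style():
    # v4 style: return the component itself carrying the new properties.
    return gr.Textbox(visible=False)
```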
6 changes: 6 additions & 0 deletions .changeset/thirty-planets-smash.md
@@ -0,0 +1,6 @@
---
"@gradio/markdown": patch
"gradio": patch
---

fix:Fix Dataframe `line_breaks`
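
For reference, `line_breaks` controls whether newlines inside markdown cells render as line breaks. A minimal usage sketch (values are illustrative):

```python
import gradio as gr

# line_breaks only affects columns rendered with the "markdown" datatype.
df = gr.Dataframe(
    value=[["line one\nline two"]],
    datatype=["markdown"],
    line_breaks=True,
)
```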
5 changes: 5 additions & 0 deletions .changeset/tricky-spoons-slide.md
@@ -0,0 +1,5 @@
---
"gradio": minor
---

feat:Fix the type in the docstring of the Code component
65 changes: 0 additions & 65 deletions CHANGELOG.md
@@ -113,71 +113,6 @@ For more information check the [`FileExplorer` documentation](https://gradio.app
### Fixes

- [#5625](https://github.com/gradio-app/gradio/pull/5625) [`9ccc4794a`](https://github.com/gradio-app/gradio/commit/9ccc4794a72ce8319417119f6c370e7af3ffca6d) - Use ContextVar instead of threading.local(). Thanks [@cbensimon](https://github.com/cbensimon)!
- [#5636](https://github.com/gradio-app/gradio/pull/5636) [`fb5964fb8`](https://github.com/gradio-app/gradio/commit/fb5964fb88082e7b956853b543c468116811cab9) - Fix bug in example cache loading event. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!
- [#5633](https://github.com/gradio-app/gradio/pull/5633) [`341402337`](https://github.com/gradio-app/gradio/commit/34140233794c29d4722020e13c2d045da642dfae) - Allow Gradio apps containing `gr.Radio()`, `gr.CheckboxGroup()`, or `gr.Dropdown()` to be loaded with `gr.load()`. Thanks [@abidlabs](https://github.com/abidlabs)!
- [#5593](https://github.com/gradio-app/gradio/pull/5593) [`88d43bd12`](https://github.com/gradio-app/gradio/commit/88d43bd124792d216da445adef932a2b02f5f416) - Fixes avatar image in chatbot being squashed. Thanks [@dawoodkhan82](https://github.com/dawoodkhan82)!

## 3.45.0-beta.8

### Features

- [#5649](https://github.com/gradio-app/gradio/pull/5649) [`d56b355c1`](https://github.com/gradio-app/gradio/commit/d56b355c12ccdeeb8406a3520fecc15ae69d9141) - Fix front-end imports + other misc fixes. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!
- [#5651](https://github.com/gradio-app/gradio/pull/5651) [`0ab84bf80`](https://github.com/gradio-app/gradio/commit/0ab84bf80f66c866327473d08fe5bdc8d32f155a) - Add overwrite flag to create command. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!

## 3.45.0-beta.7

### Features

- [#5648](https://github.com/gradio-app/gradio/pull/5648) [`c573e2339`](https://github.com/gradio-app/gradio/commit/c573e2339b86c85b378dc349de5e9223a3c3b04a) - Publish all components to npm. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!
- [#5637](https://github.com/gradio-app/gradio/pull/5637) [`670cfb75b`](https://github.com/gradio-app/gradio/commit/670cfb75b7cfd5a25a22c5aa307cd29c8879889e) - Some minor v4 fixes. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!

## 3.45.0-beta.6

### Features

- [#5630](https://github.com/gradio-app/gradio/pull/5630) [`0b4fd5b6d`](https://github.com/gradio-app/gradio/commit/0b4fd5b6db96fc95a155e5e935e17e1ab11d1161) - Fix esbuild. Thanks [@pngwn](https://github.com/pngwn)!

## 3.45.0-beta.5

### Features

- [#5624](https://github.com/gradio-app/gradio/pull/5624) [`14fc612d8`](https://github.com/gradio-app/gradio/commit/14fc612d84bf6b1408eccd3a40fab41f25477571) - Fix esbuild. Thanks [@pngwn](https://github.com/pngwn)!

## 3.45.0-beta.4

### Features

- [#5620](https://github.com/gradio-app/gradio/pull/5620) [`c4c25ecdf`](https://github.com/gradio-app/gradio/commit/c4c25ecdf8c2fab5e3c41b519564e3b6a9ebfce3) - fix build and broken imports. Thanks [@pngwn](https://github.com/pngwn)!

## 3.45.0-beta.3

### Features

- [#5618](https://github.com/gradio-app/gradio/pull/5618) [`327cc4a6c`](https://github.com/gradio-app/gradio/commit/327cc4a6c1a213238cecd21f2b6c9cedc64bde5b) - Add docstring to trigger release. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!

## 3.45.0-beta.2

### Features

- [#5615](https://github.com/gradio-app/gradio/pull/5615) [`142880ba5`](https://github.com/gradio-app/gradio/commit/142880ba589126d98da3d6a38866828864cc6b81) - Publish js theme. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!
- [#5613](https://github.com/gradio-app/gradio/pull/5613) [`d0b22b6cf`](https://github.com/gradio-app/gradio/commit/d0b22b6cf4345ce9954b166f8b4278f8d3e24472) - backend linting. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!

## 3.45.0-beta.1

### Features

- [#5610](https://github.com/gradio-app/gradio/pull/5610) [`73f2e8e7e`](https://github.com/gradio-app/gradio/commit/73f2e8e7e426e80e397b5bf23b3a64b0dd6f4e09) - Fix js deps in cli and add gradio-preview artifacts to build. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!

## 3.45.0-beta.0

### Features

- [#5507](https://github.com/gradio-app/gradio/pull/5507) [`1385dc688`](https://github.com/gradio-app/gradio/commit/1385dc6881f2d8ae7a41106ec21d33e2ef04d6a9) - Custom components. Thanks [@pngwn](https://github.com/pngwn)!
- [#5498](https://github.com/gradio-app/gradio/pull/5498) [`681f10c31`](https://github.com/gradio-app/gradio/commit/681f10c315a75cc8cd0473c9a0167961af7696db) - release first version. Thanks [@pngwn](https://github.com/pngwn)!
- [#5589](https://github.com/gradio-app/gradio/pull/5589) [`af1b2f9ba`](https://github.com/gradio-app/gradio/commit/af1b2f9bafbacf2804fcfe68af6bb4b921442aca) - image fixes. Thanks [@pngwn](https://github.com/pngwn)!
- [#5240](https://github.com/gradio-app/gradio/pull/5240) [`da05e59a5`](https://github.com/gradio-app/gradio/commit/da05e59a53bbad15e5755a47f46685da18e1031e) - Cleanup of .update and .get_config per component. Thanks [@aliabid94](https://github.com/aliabid94)!
  `get_config` is removed; a component's config is simply any attribute of the Block that shares a name with one of the constructor parameters.
  `update` is kept for backwards compatibility but deprecated: instead, return the component itself. An `updateable` decorator checks whether we are inside an update and, if so, skips the constructor and wraps the args and kwargs in an update dictionary (a sketch of the idea follows below).
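
A minimal sketch of that decorator idea, using a module-level flag as a stand-in for gradio's real update-tracking state (all names here are illustrative, not gradio internals):

```python
import functools

_in_update = False  # stand-in for "are we currently applying an update?"

def updateable(init_fn):
    @functools.wraps(init_fn)
    def wrapper(self, *args, **kwargs):
        if _in_update:
            # Skip the real constructor: record only the requested
            # changes, keyed by constructor parameter names.
            self.update_kwargs = dict(kwargs, __type__="update")
        else:
            init_fn(self, *args, **kwargs)
    return wrapper

class Textbox:
    @updateable
    def __init__(self, value=None, visible=True):
        self.value = value
        self.visible = visible
```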

### Fixes
- [#5602](https://github.com/gradio-app/gradio/pull/5602) [`54d21d3f1`](https://github.com/gradio-app/gradio/commit/54d21d3f18f2ddd4e796d149a0b41461f49c711b) - Ensure `HighlightedText` with `merge_elements` loads without a value. Thanks [@hannahblair](https://github.com/hannahblair)!
- [#5636](https://github.com/gradio-app/gradio/pull/5636) [`fb5964fb8`](https://github.com/gradio-app/gradio/commit/fb5964fb88082e7b956853b543c468116811cab9) - Fix bug in example cache loading event. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!
- [#5633](https://github.com/gradio-app/gradio/pull/5633) [`341402337`](https://github.com/gradio-app/gradio/commit/34140233794c29d4722020e13c2d045da642dfae) - Allow Gradio apps containing `gr.Radio()`, `gr.CheckboxGroup()`, or `gr.Dropdown()` to be loaded with `gr.load()`. Thanks [@abidlabs](https://github.com/abidlabs)!
5 changes: 4 additions & 1 deletion client/js/src/client.ts
@@ -255,7 +255,10 @@ export function api_factory(
 	};
 
 	const transform_files = normalise_files ?? true;
-	if (typeof window === "undefined" || !("WebSocket" in window)) {
+	if (
+		(typeof window === "undefined" || !("WebSocket" in window)) &&
+		!global.WebSocket
+	) {
 		const ws = await import("ws");
 		NodeBlob = (await import("node:buffer")).Blob;
 		//@ts-ignore
2 changes: 1 addition & 1 deletion demo/Echocardiogram-Segmentation/run.ipynb
@@ -1 +1 @@
{"cells": [{"cell_type": "markdown", "id": 302934307671667531413257853548643485645, "metadata": {}, "source": ["# Gradio Demo: Echocardiogram-Segmentation"]}, {"cell_type": "code", "execution_count": null, "id": 272996653310673477252411125948039410165, "metadata": {}, "outputs": [], "source": ["!pip install -q gradio -f https://download.pytorch.org/whl/torch_stable.html numpy matplotlib wget torch torchvision "]}, {"cell_type": "code", "execution_count": null, "id": 288918539441861185822528903084949547379, "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/Echocardiogram-Segmentation/img1.jpg\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/Echocardiogram-Segmentation/img2.jpg"]}, {"cell_type": "code", "execution_count": null, "id": 44380577570523278879349135829904343037, "metadata": {}, "outputs": [], "source": ["import os\n", "import numpy as np\n", "import torch\n", "import torchvision\n", "import wget \n", "\n", "\n", "destination_folder = \"output\"\n", "destination_for_weights = \"weights\"\n", "\n", "if os.path.exists(destination_for_weights):\n", " print(\"The weights are at\", destination_for_weights)\n", "else:\n", " print(\"Creating folder at \", destination_for_weights, \" to store weights\")\n", " os.mkdir(destination_for_weights)\n", " \n", "segmentationWeightsURL = 'https://github.com/douyang/EchoNetDynamic/releases/download/v1.0.0/deeplabv3_resnet50_random.pt'\n", "\n", "if not os.path.exists(os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL))):\n", " print(\"Downloading Segmentation Weights, \", segmentationWeightsURL,\" to \",os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL)))\n", " filename = wget.download(segmentationWeightsURL, out = destination_for_weights)\n", "else:\n", " print(\"Segmentation Weights already present\")\n", "\n", "torch.cuda.empty_cache()\n", "\n", "def collate_fn(x):\n", " x, f = zip(*x)\n", " i = list(map(lambda t: t.shape[1], x))\n", " x = torch.as_tensor(np.swapaxes(np.concatenate(x, 1), 0, 1))\n", " return x, f, i\n", "\n", "model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=False, aux_loss=False)\n", "model.classifier[-1] = torch.nn.Conv2d(model.classifier[-1].in_channels, 1, kernel_size=model.classifier[-1].kernel_size)\n", "\n", "print(\"loading weights from \", os.path.join(destination_for_weights, \"deeplabv3_resnet50_random\"))\n", "\n", "if torch.cuda.is_available():\n", " print(\"cuda is available, original weights\")\n", " device = torch.device(\"cuda\")\n", " model = torch.nn.DataParallel(model)\n", " model.to(device)\n", " checkpoint = torch.load(os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL)))\n", " model.load_state_dict(checkpoint['state_dict'])\n", "else:\n", " print(\"cuda is not available, cpu weights\")\n", " device = torch.device(\"cpu\")\n", " checkpoint = torch.load(os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL)), map_location = \"cpu\")\n", " state_dict_cpu = {k[7:]: v for (k, v) in checkpoint['state_dict'].items()}\n", " model.load_state_dict(state_dict_cpu)\n", "\n", "model.eval()\n", "\n", "def segment(input):\n", " inp = input\n", " x = inp.transpose([2, 0, 1]) # channels-first\n", " x = np.expand_dims(x, axis=0) # adding a batch dimension \n", " \n", " mean = x.mean(axis=(0, 2, 3))\n", " std = x.std(axis=(0, 2, 3))\n", " x = x - mean.reshape(1, 3, 1, 1)\n", 
" x = x / std.reshape(1, 3, 1, 1)\n", " \n", " with torch.no_grad():\n", " x = torch.from_numpy(x).type('torch.FloatTensor').to(device)\n", " output = model(x) \n", " \n", " y = output['out'].numpy()\n", " y = y.squeeze()\n", " \n", " out = y>0 \n", " \n", " mask = inp.copy()\n", " mask[out] = np.array([0, 0, 255])\n", " \n", " return mask\n", "\n", "import gradio as gr\n", "\n", "i = gr.Image(shape=(112, 112), label=\"Echocardiogram\")\n", "o = gr.Image(label=\"Segmentation Mask\")\n", "\n", "examples = [[\"img1.jpg\"], [\"img2.jpg\"]]\n", "title = None #\"Left Ventricle Segmentation\"\n", "description = \"This semantic segmentation model identifies the left ventricle in echocardiogram images.\"\n", "# videos. Accurate evaluation of the motion and size of the left ventricle is crucial for the assessment of cardiac function and ejection fraction. In this interface, the user inputs apical-4-chamber images from echocardiography videos and the model will output a prediction of the localization of the left ventricle in blue. This model was trained on the publicly released EchoNet-Dynamic dataset of 10k echocardiogram videos with 20k expert annotations of the left ventricle and published as part of \u2018Video-based AI for beat-to-beat assessment of cardiac function\u2019 by Ouyang et al. in Nature, 2020.\"\n", "thumbnail = \"https://raw.githubusercontent.com/gradio-app/hub-echonet/master/thumbnail.png\"\n", "gr.Interface(segment, i, o, examples=examples, allow_flagging=False, analytics_enabled=False, thumbnail=thumbnail, cache_examples=False).launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
{"cells": [{"cell_type": "markdown", "id": "302934307671667531413257853548643485645", "metadata": {}, "source": ["# Gradio Demo: Echocardiogram-Segmentation"]}, {"cell_type": "code", "execution_count": null, "id": "272996653310673477252411125948039410165", "metadata": {}, "outputs": [], "source": ["!pip install -q gradio -f https://download.pytorch.org/whl/torch_stable.html numpy matplotlib wget torch torchvision "]}, {"cell_type": "code", "execution_count": null, "id": "288918539441861185822528903084949547379", "metadata": {}, "outputs": [], "source": ["# Downloading files from the demo repo\n", "import os\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/Echocardiogram-Segmentation/img1.jpg\n", "!wget -q https://github.com/gradio-app/gradio/raw/main/demo/Echocardiogram-Segmentation/img2.jpg"]}, {"cell_type": "code", "execution_count": null, "id": "44380577570523278879349135829904343037", "metadata": {}, "outputs": [], "source": ["import os\n", "import numpy as np\n", "import torch\n", "import torchvision\n", "import wget \n", "\n", "\n", "destination_folder = \"output\"\n", "destination_for_weights = \"weights\"\n", "\n", "if os.path.exists(destination_for_weights):\n", " print(\"The weights are at\", destination_for_weights)\n", "else:\n", " print(\"Creating folder at \", destination_for_weights, \" to store weights\")\n", " os.mkdir(destination_for_weights)\n", " \n", "segmentationWeightsURL = 'https://github.com/douyang/EchoNetDynamic/releases/download/v1.0.0/deeplabv3_resnet50_random.pt'\n", "\n", "if not os.path.exists(os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL))):\n", " print(\"Downloading Segmentation Weights, \", segmentationWeightsURL,\" to \",os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL)))\n", " filename = wget.download(segmentationWeightsURL, out = destination_for_weights)\n", "else:\n", " print(\"Segmentation Weights already present\")\n", "\n", "torch.cuda.empty_cache()\n", "\n", "def collate_fn(x):\n", " x, f = zip(*x)\n", " i = list(map(lambda t: t.shape[1], x))\n", " x = torch.as_tensor(np.swapaxes(np.concatenate(x, 1), 0, 1))\n", " return x, f, i\n", "\n", "model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=False, aux_loss=False)\n", "model.classifier[-1] = torch.nn.Conv2d(model.classifier[-1].in_channels, 1, kernel_size=model.classifier[-1].kernel_size)\n", "\n", "print(\"loading weights from \", os.path.join(destination_for_weights, \"deeplabv3_resnet50_random\"))\n", "\n", "if torch.cuda.is_available():\n", " print(\"cuda is available, original weights\")\n", " device = torch.device(\"cuda\")\n", " model = torch.nn.DataParallel(model)\n", " model.to(device)\n", " checkpoint = torch.load(os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL)))\n", " model.load_state_dict(checkpoint['state_dict'])\n", "else:\n", " print(\"cuda is not available, cpu weights\")\n", " device = torch.device(\"cpu\")\n", " checkpoint = torch.load(os.path.join(destination_for_weights, os.path.basename(segmentationWeightsURL)), map_location = \"cpu\")\n", " state_dict_cpu = {k[7:]: v for (k, v) in checkpoint['state_dict'].items()}\n", " model.load_state_dict(state_dict_cpu)\n", "\n", "model.eval()\n", "\n", "def segment(input):\n", " inp = input\n", " x = inp.transpose([2, 0, 1]) # channels-first\n", " x = np.expand_dims(x, axis=0) # adding a batch dimension \n", " \n", " mean = x.mean(axis=(0, 2, 3))\n", " std = x.std(axis=(0, 2, 3))\n", " x = x - mean.reshape(1, 3, 1, 
1)\n", " x = x / std.reshape(1, 3, 1, 1)\n", " \n", " with torch.no_grad():\n", " x = torch.from_numpy(x).type('torch.FloatTensor').to(device)\n", " output = model(x) \n", " \n", " y = output['out'].numpy()\n", " y = y.squeeze()\n", " \n", " out = y>0 \n", " \n", " mask = inp.copy()\n", " mask[out] = np.array([0, 0, 255])\n", " \n", " return mask\n", "\n", "import gradio as gr\n", "\n", "i = gr.Image(shape=(112, 112), label=\"Echocardiogram\")\n", "o = gr.Image(label=\"Segmentation Mask\")\n", "\n", "examples = [[\"img1.jpg\"], [\"img2.jpg\"]]\n", "title = None #\"Left Ventricle Segmentation\"\n", "description = \"This semantic segmentation model identifies the left ventricle in echocardiogram images.\"\n", "# videos. Accurate evaluation of the motion and size of the left ventricle is crucial for the assessment of cardiac function and ejection fraction. In this interface, the user inputs apical-4-chamber images from echocardiography videos and the model will output a prediction of the localization of the left ventricle in blue. This model was trained on the publicly released EchoNet-Dynamic dataset of 10k echocardiogram videos with 20k expert annotations of the left ventricle and published as part of \u2018Video-based AI for beat-to-beat assessment of cardiac function\u2019 by Ouyang et al. in Nature, 2020.\"\n", "thumbnail = \"https://raw.githubusercontent.com/gradio-app/hub-echonet/master/thumbnail.png\"\n", "gr.Interface(segment, i, o, examples=examples, allow_flagging=False, analytics_enabled=False, thumbnail=thumbnail, cache_examples=False).launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
