Commit 7475f30
Merge branch 'main' into remove-deprecation-raise
freddyaboulton committed Nov 3, 2023
2 parents cd621c5 + e3ede2f
Showing 170 changed files with 3,449 additions and 4,657 deletions.
6 changes: 6 additions & 0 deletions .changeset/better-parents-bow.md
@@ -0,0 +1,6 @@
---
"@gradio/app": patch
"gradio": patch
---

fix:Fix updating interactive prop
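For context, a minimal sketch (not from this commit) of the pattern this fix concerns: toggling a component's `interactive` prop from an event handler.

```python
import gradio as gr

with gr.Blocks() as demo:
    box = gr.Textbox(label="Name", interactive=False)
    unlock = gr.Button("Unlock")
    # The fix ensures an `interactive` update returned from an event
    # handler is actually applied to the rendered component.
    unlock.click(lambda: gr.update(interactive=True), None, box)

demo.launch()
```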
5 changes: 5 additions & 0 deletions .changeset/cute-poems-cover.md
@@ -0,0 +1,5 @@
---
"gradio": minor
---

feat:Remove session if browser closed on mobile
6 changes: 6 additions & 0 deletions .changeset/fifty-baboons-shout.md
@@ -0,0 +1,6 @@
---
"@gradio/radio": patch
"gradio": patch
---

fix:Fix bug where radio.select passes the previous value to the function instead of the selected value
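An illustrative sketch (not part of this commit) of the `.select` listener the fix targets; with the fix, the event data carries the newly selected choice.

```python
import gradio as gr

with gr.Blocks() as demo:
    radio = gr.Radio(["small", "medium", "large"], label="Size")
    out = gr.Textbox(label="Selected")

    def on_select(evt: gr.SelectData):
        # With the fix, evt.value is the choice just clicked,
        # not the radio's previous value.
        return evt.value

    radio.select(on_select, None, out)

demo.launch()
```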
6 changes: 6 additions & 0 deletions .changeset/fruity-peaches-appear.md
@@ -0,0 +1,6 @@
---
"@gradio/model3d": minor
"gradio": minor
---

feat:Model3D panning, improved UX
5 changes: 5 additions & 0 deletions .changeset/hot-donuts-notice.md
@@ -0,0 +1,5 @@
---
"website": patch
---

feat:Some tweaks to the Custom Components Guide
5 changes: 5 additions & 0 deletions .changeset/kind-toys-stop.md
@@ -0,0 +1,5 @@
---
"gradio": minor
---

feat:Fixes input `Image` component with `streaming=True`
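A minimal sketch of the streaming-input pattern this entry refers to (illustrative only; the flip function and `sources=["webcam"]` spelling are assumptions, not from this commit).

```python
import gradio as gr

def flip(frame):
    # Each webcam frame arrives as a numpy array while streaming.
    return None if frame is None else frame[:, ::-1]

demo = gr.Interface(
    flip,
    gr.Image(sources=["webcam"], streaming=True),
    "image",
    live=True,
)
demo.launch()
```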
5 changes: 5 additions & 0 deletions .changeset/major-bushes-ring.md
@@ -0,0 +1,5 @@
---
"gradio": patch
---

fix:Add likeable to config for Chatbot
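For reference, attaching a `.like` listener is what requires the `likeable` flag in the config; a hypothetical minimal example (handler names are illustrative):

```python
import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()

    def on_like(evt: gr.LikeData):
        # evt.index locates the message; evt.liked is True for thumbs-up.
        print(evt.index, evt.liked)

    # Registering the listener marks the Chatbot as likeable.
    chatbot.like(on_like, None, None)

demo.launch()
```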
10 changes: 10 additions & 0 deletions .changeset/polite-radios-kiss.md
@@ -0,0 +1,10 @@
---
"@gradio/app": patch
"@gradio/audio": patch
"@gradio/tootils": patch
"@gradio/upload": patch
"@gradio/video": patch
"gradio": patch
---

fix:Video/Audio fixes
7 changes: 7 additions & 0 deletions .changeset/pretty-doodles-work.md
@@ -0,0 +1,7 @@
---
"@gradio/radio": patch
"@gradio/tootils": patch
"gradio": patch
---

fix:Ensure radios have different names
5 changes: 5 additions & 0 deletions .changeset/public-banks-film.md
@@ -0,0 +1,5 @@
---
"gradio": patch
---

fix:Fixes: Initial message is overwritten in chat interface
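A sketch of the affected pattern (illustrative, assuming the tuple-style chat history): seeding `gr.ChatInterface` with an initial assistant message, which should no longer be overwritten on load.

```python
import gradio as gr

def echo(message, history):
    return message

demo = gr.ChatInterface(
    echo,
    # The seeded greeting should persist instead of being overwritten.
    chatbot=gr.Chatbot(value=[[None, "Welcome! Ask me anything."]]),
)
demo.launch()
```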
8 changes: 8 additions & 0 deletions .changeset/public-hotels-divide.md
@@ -0,0 +1,8 @@
---
"@gradio/app": patch
"@gradio/checkboxgroup": patch
"@gradio/tootils": patch
"gradio": patch
---

fix:Ensure `gr.CheckboxGroup` updates as expected.
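An illustrative sketch (not from this commit) of the kind of update that should now be reflected in the UI:

```python
import gradio as gr

with gr.Blocks() as demo:
    colors = gr.CheckboxGroup(["red", "green", "blue"], label="Colors")
    select_all = gr.Button("Select all")
    # Returning a new value for the CheckboxGroup should tick all boxes.
    select_all.click(lambda: ["red", "green", "blue"], None, colors)

demo.launch()
```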
7 changes: 7 additions & 0 deletions .changeset/purple-areas-see.md
@@ -0,0 +1,7 @@
---
"@gradio/app": minor
"@gradio/preview": minor
"gradio": minor
---

feat:Improve Embed and CDN handling and fix a couple of related bugs
6 changes: 6 additions & 0 deletions .changeset/quick-cases-stay.md
@@ -0,0 +1,6 @@
---
"gradio": minor
"website": minor
---

feat:Fix various issues with demos on website
6 changes: 6 additions & 0 deletions .changeset/sour-sheep-look.md
@@ -0,0 +1,6 @@
---
"@gradio/model3d": minor
"gradio": minor
---

feat:Ensure Model 3D updates when attributes change
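A hypothetical sketch of the attribute-update path this entry covers (`camera_position` is assumed here as the changed attribute; the commit does not name one):

```python
import gradio as gr

with gr.Blocks() as demo:
    scene = gr.Model3D(label="Scene")
    zoom_out = gr.Button("Zoom out")
    # Updating an attribute such as camera_position should now
    # re-render the viewer without reloading the model file.
    zoom_out.click(lambda: gr.Model3D(camera_position=(90, 90, 10)), None, scene)

demo.launch()
```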
6 changes: 6 additions & 0 deletions .changeset/upset-radios-switch.md
@@ -0,0 +1,6 @@
---
"@gradio/fileexplorer": patch
"gradio": patch
---

fix:Fix file overflow and add keyboard navigation to `FileExplorer`
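For context, a minimal usage sketch (the `glob` value is an arbitrary example); with the fix, long file names no longer overflow and the tree can be traversed with the keyboard.

```python
import gradio as gr

with gr.Blocks() as demo:
    # Keyboard navigation (e.g. arrow keys) works within the tree
    # after this fix; long entries no longer overflow their row.
    gr.FileExplorer(glob="**/*.py", label="Project files")

demo.launch()
```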
5 changes: 4 additions & 1 deletion .config/.prettierignore
@@ -24,4 +24,7 @@
 sweep.yaml
 **/.vercel/**
 **/build/**
-**/*.md
+**/*.md
+**/src/lib/json/**/*
+**/playwright/.cache/**/*
+**/theme/src/pollen.css
5 changes: 1 addition & 4 deletions .config/basevite.config.ts
@@ -27,10 +27,7 @@ const version = JSON.parse(readFileSync(version_path, { encoding: "utf-8" }))
 
 //@ts-ignore
 export default defineConfig(({ mode }) => {
-	const production =
-		mode === "production:cdn" ||
-		mode === "production:local" ||
-		mode === "production:website";
+	const production = mode === "production";
 
 	return {
 		server: {
2 changes: 1 addition & 1 deletion .github/workflows/deploy-pr-to-spaces.yml
@@ -62,7 +62,7 @@ jobs:
           github.event.workflow_run.conclusion == 'success'
         id: upload-website-demos
         run: |
-          python scripts/upload_website_demos.py --AUTH_TOKEN ${{ secrets.SPACES_DEPLOY_TOKEN }} \
+          python scripts/upload_website_demos.py --AUTH_TOKEN ${{ secrets.WEBSITE_SPACES_DEPLOY_TOKEN }} \
             --WHEEL_URL https://gradio-builds.s3.amazonaws.com/${{ steps.set-outputs.outputs.gh_sha }}/ \
             --GRADIO_VERSION ${{ steps.set-outputs.outputs.gradio_version }}
2 changes: 2 additions & 0 deletions .github/workflows/ui.yml
@@ -30,6 +30,8 @@ jobs:
         run: pnpm --filter @gradio/client build
       - name: build the wasm module
         run: pnpm --filter @gradio/wasm build
+      - name: format check
+        run: pnpm format:check
       - name: lint
         run: pnpm lint
       - name: typecheck
3 changes: 2 additions & 1 deletion .vscode/settings.json
@@ -18,5 +18,6 @@
 	"eslint.experimental.useFlatConfig": true,
 	"eslint.options": {
 		"overrideConfigFile": "./.config/eslint.config.js"
-	}
+	},
+	"typescript.tsdk": "node_modules/typescript/lib"
 }
2 changes: 1 addition & 1 deletion README.md
@@ -163,7 +163,7 @@ def sepia(input_img):
     sepia_img /= sepia_img.max()
     return sepia_img
 
-demo = gr.Interface(sepia, gr.Image(shape=(200, 200)), "image")
+demo = gr.Interface(sepia, gr.Image(width=200, height=200), "image")
 demo.launch()
```
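Note: Gradio 4 removed the `shape` argument from `gr.Image`, which previously resized the input; `width` and `height` set the display size instead, hence this README change.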

5 changes: 1 addition & 4 deletions build_pypi.sh
@@ -9,12 +9,9 @@ new_version=$(python -c "import json; f = open('$FILE', 'r'); data = json.load(f
 GRADIO_VERSION=$new_version
 
 rm -rf gradio/templates/frontend
-rm -rf gradio/templates/cdn
 pnpm i --frozen-lockfile --ignore-scripts
 GRADIO_VERSION=$new_version pnpm build
-GRADIO_VERSION=$new_version pnpm build:cdn
-aws s3 cp gradio/templates/cdn "s3://gradio/${new_version}/" --recursive --region us-west-2
-cp gradio/templates/cdn/index.html gradio/templates/frontend/share.html
+aws s3 cp gradio/templates/frontend "s3://gradio/${new_version}/" --recursive --region us-west-2
 
 rm -rf dist/*
 rm -rf build/*
2 changes: 1 addition & 1 deletion demo/Echocardiogram-Segmentation/run.ipynb
@@ -1 +1 @@
(single-line notebook JSON; unchanged cells elided — the diff mirrors run.py below)
-{"cells": [ … "i = gr.Image(shape=(112, 112), label=\"Echocardiogram\")\n", … "gr.Interface(segment, i, o, examples=examples, allow_flagging=False, analytics_enabled=False, thumbnail=thumbnail, cache_examples=False).launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
+{"cells": [ … "i = gr.Image(label=\"Echocardiogram\")\n", … "gr.Interface(segment, i, o, examples=examples, analytics_enabled=False, thumbnail=thumbnail, cache_examples=False).launch()\n"]}], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
4 changes: 2 additions & 2 deletions demo/Echocardiogram-Segmentation/run.py
@@ -77,12 +77,12 @@ def segment(input):
 
 import gradio as gr
 
-i = gr.Image(shape=(112, 112), label="Echocardiogram")
+i = gr.Image(label="Echocardiogram")
 o = gr.Image(label="Segmentation Mask")
 
 examples = [["img1.jpg"], ["img2.jpg"]]
 title = None #"Left Ventricle Segmentation"
 description = "This semantic segmentation model identifies the left ventricle in echocardiogram images."
 # videos. Accurate evaluation of the motion and size of the left ventricle is crucial for the assessment of cardiac function and ejection fraction. In this interface, the user inputs apical-4-chamber images from echocardiography videos and the model will output a prediction of the localization of the left ventricle in blue. This model was trained on the publicly released EchoNet-Dynamic dataset of 10k echocardiogram videos with 20k expert annotations of the left ventricle and published as part of ‘Video-based AI for beat-to-beat assessment of cardiac function’ by Ouyang et al. in Nature, 2020."
 thumbnail = "https://raw.githubusercontent.com/gradio-app/hub-echonet/master/thumbnail.png"
-gr.Interface(segment, i, o, examples=examples, allow_flagging=False, analytics_enabled=False, thumbnail=thumbnail, cache_examples=False).launch()
+gr.Interface(segment, i, o, examples=examples, analytics_enabled=False, thumbnail=thumbnail, cache_examples=False).launch()
