
dataset

Here are 4,314 public repositories matching this topic...

ljades
ljades commented Feb 19, 2021

How to reproduce the behaviour

The error occurs at Step 5/9 of the docker build process:

fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz: BAD signature
WARNING: Ignoring http
Zillibub
Zillibub commented Feb 23, 2021

🐛🐛 Bug Report

⚗️ Current Behavior

Launching large_dataset_build.py fails with an exception:

  File "hub\store\shape_detector.py", line 107, in _get_chunks
    assert chunks[0] == 1

The chunks value in the schema in the given example is chunks=(2, 1920, 1080, 3), so to run this code without exceptions I need to change it to chunks=(1, 1920, 1080, 3). Why is there an assertion that chunks[0] == 1?
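For context, here is a minimal NumPy sketch of the per-sample chunking the assertion appears to enforce: with chunks[0] == 1, every chunk holds exactly one sample. This is a generic illustration, not hub's actual storage code.

    import numpy as np

    # Hypothetical stand-in for a dataset of 10 video frames.
    data = np.zeros((10, 1920, 1080, 3), dtype=np.uint8)
    chunks = (1, 1920, 1080, 3)  # one sample per chunk

    # Slice the sample axis in steps of chunks[0]: with chunks[0] == 1,
    # each chunk maps to exactly one sample, so reading a single sample
    # never touches a chunk that contains other samples.
    chunked = [data[i:i + chunks[0]] for i in range(0, data.shape[0], chunks[0])]
    assert all(c.shape == chunks for c in chunked)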

bloodwass
bloodwass commented Jun 17, 2019

Expected Behavior

I want to convert the torch.nn.Linear modules in my model (which may be large) to weight-drop linear modules, and I want to train the model on multiple GPUs. However, I get a RuntimeError in my sample code. First, I have _weight_drop(), which drops part of the weights of a torch.nn.Linear (see the code below).
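The author's code is cut from this excerpt, so here is a minimal sketch of the weight-drop (DropConnect) idea: dropout is applied to the weight matrix of an nn.Linear rather than to its activations. The class name WeightDropLinear and the weight_dropout argument are illustrative, not the author's _weight_drop() implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WeightDropLinear(nn.Linear):
        """nn.Linear whose weight matrix is randomly dropped during training."""
        def __init__(self, in_features, out_features, weight_dropout=0.5):
            super().__init__(in_features, out_features)
            self.weight_dropout = weight_dropout

        def forward(self, input):
            # Drop entries of the weight matrix itself (DropConnect); the
            # weight stays a registered Parameter, so nn.DataParallel will
            # replicate it to every device.
            w = F.dropout(self.weight, p=self.weight_dropout,
                          training=self.training)
            return F.linear(input, w, self.bias)

    layer = WeightDropLinear(16, 8, weight_dropout=0.3)
    out = layer(torch.randn(4, 16))

A common cause of the multi-GPU error below is a weight-drop implementation that stores the raw weight as a plain tensor attribute instead of a Parameter: nn.DataParallel replicates only registered parameters and buffers, so such a tensor stays on one GPU.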

Actual Behavior

RuntimeError: arguments are located on different GPUs at /

CedricProfessionnel
CedricProfessionnel commented Jan 19, 2021

Each item in the list should be written in the language it represents.
Ex.:
The French choice should be Français
The Portuguese should be Português
The Chinese should be 中文
The Dutch should be Nederlands
(These translations come from Google Translate and should be double-checked.)

![image](https://user-images.githubusercontent.com/74309042/105092026-b4229600-5a6e-11eb-9e6a-394b0

Objectron is a dataset of short, object-centric video clips. The videos also contain AR session metadata, including camera poses, sparse point clouds, and planes. In each video, the camera moves around and above the object and captures it from different views. Each object is annotated with a 3D bounding box, which describes the object's position, orientation, and dimensions. The dataset contains about 15K annotated video clips and 4M annotated images in the following categories: bikes, books, bottles, cameras, cereal boxes, chairs, cups, laptops, and shoes.

  • Updated Feb 19, 2021
  • Jupyter Notebook
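As an aside, the (position, orientation, dimensions) parameterization described above fully determines a box's eight corners. The following generic NumPy construction illustrates this; it is not Objectron's actual schema or API.

    import itertools
    import numpy as np

    def box_corners(center, rotation, size):
        """Return the 8 world-space corners of a 3D bounding box.

        center: (3,) box center; rotation: (3, 3) rotation matrix;
        size: (3,) edge lengths along the box's own axes.
        """
        half = np.asarray(size, dtype=float) / 2.0
        # All sign combinations of the half-extents give the 8 corners
        # in the box's local frame.
        signs = np.array(list(itertools.product((-1.0, 1.0), repeat=3)))
        local = signs * half
        # Rotate into world orientation, then translate to the center.
        return local @ np.asarray(rotation).T + np.asarray(center, dtype=float)

    corners = box_corners(center=[0.0, 0.0, 1.0],
                          rotation=np.eye(3),
                          size=[0.2, 0.1, 0.3])
    assert corners.shape == (8, 3)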
