Merged
134 changes: 129 additions & 5 deletions examples/basics/projects.ipynb
@@ -48,7 +48,7 @@
{
@ovalle15 (Contributor), Jan 4, 2024:
Can we add some text here about why we are creating an ontology and labels, and re-format some of the code blocks like we discussed?

Collaborator (Author) replied:
Hey! I made these changes. Let me know what you think.

"metadata": {},
"source": [
-    "!pip install labelbox -q"
+    "!pip install -q \"labelbox[data]\""
],
"cell_type": "code",
"outputs": [],
@@ -58,7 +58,7 @@
"metadata": {},
"source": [
"import labelbox as lb\n",
-    "import os\n",
+    "import labelbox.types as lb_types\n",
"import uuid"
],
"cell_type": "code",
@@ -76,8 +76,9 @@
{
"metadata": {},
"source": [
-    "# Add your api key\n",
+    "# Add your API key\n",
     "API_KEY = \"\"\n",
+    "# To get your API key go to: Workspace settings -> API -> Create API Key\n",
"client = lb.Client(api_key=API_KEY)"
],
"cell_type": "code",
@@ -154,6 +155,127 @@
"outputs": [],
"execution_count": null
},
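As an aside on the API-key cell above: hard-coding a key into a notebook is easy to leak. A minimal sketch of reading it from an environment variable instead (the variable name `LABELBOX_API_KEY` and the commented client call are assumptions, not taken from the notebook):

```python
import os

# Read the key from an environment variable rather than hard-coding it.
# The variable name LABELBOX_API_KEY is an assumption for this sketch.
API_KEY = os.environ.get("LABELBOX_API_KEY", "")

# With the labelbox package installed, the client would then be built as:
# client = lb.Client(api_key=API_KEY)
```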
{
"metadata": {},
"source": [
"### Attach ontology and label data rows\n",
"\n",
"In this section, we create an ontology to attach to the project and labels to import as ground truths. Later steps in this demo rely on this setup. For more information, see our [Ontology](https://docs.labelbox.com/reference/ontology) and [Import Image Annotation](https://docs.labelbox.com/reference/import-image-annotations) development guides."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"Create your ontology"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"# Create normalized JSON with a radio classification\n",
"ontology_builder = lb.OntologyBuilder(classifications=[ # List of Classification objects\n",
" lb.Classification(class_type=lb.Classification.Type.RADIO,\n",
" name=\"radio_question\",\n",
" options=[\n",
" lb.Option(value=\"first_radio_answer\"),\n",
" lb.Option(value=\"second_radio_answer\")\n",
" ]),\n",
"])\n",
"# Creating an ontology\n",
"ontology = client.create_ontology(\"test-ontology\",\n",
" ontology_builder.asdict())"
],
"cell_type": "code",
"outputs": [],
"execution_count": null
},
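For reference, the builder in the cell above serializes to a plain dict before upload. A hand-built sketch of the general shape (the exact key names are an approximation; `OntologyBuilder.asdict()` is the authoritative serializer):

```python
# Hand-built approximation of the normalized ontology dict.
# Key names here are assumptions for illustration only.
ontology_dict = {
    "tools": [],  # no tools in this demo, classifications only
    "classifications": [
        {
            "type": "radio",
            "name": "radio_question",
            "options": [
                {"value": "first_radio_answer"},
                {"value": "second_radio_answer"},
            ],
        }
    ],
}
```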
{
"metadata": {},
"source": [
"Attach ontology to project"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"project.setup_editor(ontology)"
],
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"Create labels and upload them to project as ground truths"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"# Create labels\n",
"labels = []\n",
"for global_key in global_keys:\n",
" labels.append(lb_types.Label(data=lb_types.ImageData(global_key=global_key),\n",
" annotations=[\n",
" # Create radio classification annotation for labels\n",
" lb_types.ClassificationAnnotation(\n",
" name=\"radio_question\",\n",
" value=lb_types.Radio(answer=lb_types.ClassificationAnswer(\n",
" name=\"second_radio_answer\")))\n",
" ]))\n",
"\n",
"# Upload labels for the data rows in project\n",
"upload_job = lb.LabelImport.create_from_objects(\n",
"    client=client,\n",
"    project_id=project.uid,\n",
"    name=\"label_import_job\" + str(uuid.uuid4()),\n",
"    labels=labels)\n",
"\n",
"upload_job.wait_until_done()\n",
"\n",
"print(f\"Errors: {upload_job.errors}\")"
],
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"### Move data rows in project to different task queues"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"# Get list of task queues for project\n",
"task_queues = project.task_queues()\n",
"\n",
"for task_queue in task_queues:\n",
" print(task_queue)"
],
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"project.move_data_rows_to_task_queue(\n",
"    data_row_ids=lb.GlobalKeys(global_keys),  # Provide a list of global keys\n",
"    task_queue_id=task_queues[2].uid  # Passing None moves data rows to the \"Done\" task queue\n",
")"
],
"cell_type": "code",
"outputs": [],
"execution_count": null
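The inline comment above notes that passing `None` as the task queue sends rows to the "Done" queue. A hedged sketch of that variant (the global keys are placeholders, and the call itself needs a live project, so it is shown commented out):

```python
# Hypothetical global keys for illustration; in the notebook these come
# from the data rows created earlier.
global_keys = ["sample-key-1", "sample-key-2"]

# With a live project and client, passing task_queue_id=None would move
# these rows to the "Done" queue:
# project.move_data_rows_to_task_queue(
#     data_row_ids=lb.GlobalKeys(global_keys),
#     task_queue_id=None,  # None sends the rows to the "Done" queue
# )
```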
},
{
"metadata": {},
"source": [
@@ -180,14 +302,16 @@
{
"metadata": {},
"source": [
-    "### Delete"
+    "### Clean Up"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
-    "# project.delete()"
+    "# project.delete()\n",
+    "# dataset.delete()\n",
+    "# client.delete_unused_ontology(ontology.uid)"
],
"cell_type": "code",
"outputs": [],