{"payload":{"pageCount":2,"repositories":[{"type":"Public","name":"cholectrack20","owner":"CAMMA-public","isFork":false,"description":"","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":6,"forksCount":0,"license":"Other","participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-24T13:38:56.725Z"}},{"type":"Public","name":"SSG-VQA","owner":"CAMMA-public","isFork":false,"description":"SSG-VQA is a Visual Question Answering (VQA) dataset on laparoscopic videos providing diverse, geometrically grounded, unbiased and surgical action-oriented queries generated using scene graphs.","topicNames":["scene-graph","vqa-dataset","surgical-data-science"],"topicsNotShown":0,"allTopics":["scene-graph","vqa-dataset","surgical-data-science"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":2,"starsCount":20,"forksCount":1,"license":"Other","participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-23T12:33:46.805Z"}},{"type":"Public template","name":"ivtmetrics","owner":"CAMMA-public","isFork":false,"description":"A Python evaluation metrics package for surgical action triplet recognition","topicNames":["recognition","detection","average-precision","action-triplet","python","machine-learning","metrics","object-detection"],"topicsNotShown":0,"allTopics":["recognition","detection","average-precision","action-triplet","python","machine-learning","metrics","object-detection"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":12,"forksCount":2,"license":"BSD 2-Clause \"Simplified\" License","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-05-15T06:27:19.317Z"}},{"type":"Public","name":"SelfPose3d","owner":"CAMMA-public","isFork":false,"description":"Official code for \"SelfPose3d: Self-Supervised Multi-Person Multi-View 3d Pose Estimation\"","topicNames":["human-pose-estimation","multi-view-learning","self-supervised-learning","3d-human-shape-and-pose-estimation","multi-view-multi-person-3d-human-pose-estimation"],"topicsNotShown":0,"allTopics":["human-pose-estimation","multi-view-learning","self-supervised-learning","3d-human-shape-and-pose-estimation","multi-view-multi-person-3d-human-pose-estimation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":3,"starsCount":11,"forksCount":0,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-19T10:37:26.154Z"}},{"type":"Public","name":"SurgLatentGraph","owner":"CAMMA-public","isFork":false,"description":"This repository contains the code associated with our 2023 TMI paper \"Latent Graph Representations for Critical View of Safety Assessment\" and our MICCAI 2023 paper \"Encoding Surgical Videos as Spatiotemporal Graphs for Object and Anatomy-Driven 
Reasoning\".","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":16,"forksCount":1,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-04-10T11:42:43.603Z"}},{"type":"Public","name":"surgical-imagen","owner":"CAMMA-public","isFork":false,"description":"","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":{"name":"HTML","color":"#e34c26"},"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":0,"license":null,"participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-03-27T12:27:48.285Z"}},{"type":"Public","name":"med-moco","owner":"CAMMA-public","isFork":false,"description":"Overcoming Dimensional Collapse in Self-supervised Contrastive Learning for Medical Image Segmentation","topicNames":["medical-imaging","self-supervised-learning","dimensional-collapse","local-feature-learning"],"topicsNotShown":0,"allTopics":["medical-imaging","self-supervised-learning","dimensional-collapse","local-feature-learning"],"primaryLanguage":null,"pullRequestCount":0,"issueCount":1,"starsCount":2,"forksCount":0,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2024-02-19T09:25:04.315Z"}},{"type":"Public","name":"MultiBypass140","owner":"CAMMA-public","isFork":false,"description":"","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":3,"forksCount":0,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-12-20T13:13:41.717Z"}},{"type":"Public","name":"Endoscapes","owner":"CAMMA-public","isFork":false,"description":"Official Repository for the Endoscapes Dataset for Surgical Scene Segmentation, Object Detection, and Critical View of Safety Assessment ","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":null,"pullRequestCount":0,"issueCount":0,"starsCount":21,"forksCount":0,"license":"Other","participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,11,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-12-20T12:00:47.597Z"}},{"type":"Public","name":"mcit-ig","owner":"CAMMA-public","isFork":false,"description":"","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":1,"forksCount":0,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-10-04T18:10:35.616Z"}},{"type":"Public","name":"SurgVLP","owner":"CAMMA-public","isFork":false,"description":"Learning multi-modal representations by watching hundreds of surgical video lectures","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":null,"pullRequestCount":0,"issueCount":0,"starsCount":5,"forksCount":0,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-09-07T08:00:24.647Z"}},{"type":"Public","name":"rendezvous","owner":"CAMMA-public","isFork":false,"description":"A transformer-inspired neural network for surgical action triplet recognition from laparoscopic 
videos.","topicNames":["python","deep-learning","tensorflow","python3","pytorch","transformer","attention-mechanism","action-recognition","weakly-supervised-learning","state-of-the-art"],"topicsNotShown":5,"allTopics":["python","deep-learning","tensorflow","python3","pytorch","transformer","attention-mechanism","action-recognition","weakly-supervised-learning","state-of-the-art","tensorflow2","laparoscopy","cholect45","cholect50","action-triplet"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":4,"starsCount":17,"forksCount":8,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-08-30T10:03:51.496Z"}},{"type":"Public","name":"cholect50","owner":"CAMMA-public","isFork":false,"description":"A repository for surgical action triplet dataset. Data are videos of laparoscopic cholecystectomy that have been annotated with <instrument, verb, target> labels for every surgical fine-grained activity.","topicNames":["python","data","machine-learning","recognition","localization","deep-learning","tensorflow","detection","pytorch","artificial-intelligence"],"topicsNotShown":8,"allTopics":["python","data","machine-learning","recognition","localization","deep-learning","tensorflow","detection","pytorch","artificial-intelligence","action","datasets","endoscopy","laparoscopy","surgical-data-science","cholect45","cholect50","cholect40"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":1,"issueCount":3,"starsCount":33,"forksCount":4,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-08-07T17:42:49.476Z"}},{"type":"Public","name":"rendezvous-in-time","owner":"CAMMA-public","isFork":false,"description":"rendezvous-in-time","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":2,"starsCount":6,"forksCount":3,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-07-05T15:45:02.532Z"}},{"type":"Public","name":"SelfSupSurg","owner":"CAMMA-public","isFork":false,"description":"Official repository for \"Dissecting Self-Supervised Learning Methods for Surgical Computer Vision\"","topicNames":["endoscopic-vision","surgical-phase-recognition","surgical-data-science","surgical-scene-segmentation","surgical-computer-vision","laparascopic-cholecystectomy","deep-learning","semi-supervised-learning","transfer-learning","self-supervised-learning"],"topicsNotShown":0,"allTopics":["endoscopic-vision","surgical-phase-recognition","surgical-data-science","surgical-scene-segmentation","surgical-computer-vision","laparascopic-cholecystectomy","deep-learning","semi-supervised-learning","transfer-learning","self-supervised-learning"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":2,"starsCount":28,"forksCount":5,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-07-02T16:22:21.558Z"}},{"type":"Public","name":"indexity","owner":"CAMMA-public","isFork":false,"description":"Indexity is a web-based tool designed for medical video annotation in surgical data science projects.","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":{"name":"TypeScript","color":"#3178c6"},"pullRequestCount":0,"issueCount":0,"starsCount":8,"forksCount":1,"license":"BSD 3-Clause \"New\" or \"Revised\" 
License","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-06-27T09:07:43.710Z"}},{"type":"Public","name":"AI4surgery","owner":"CAMMA-public","isFork":false,"description":"Short tutorial supplementing the Neural Networks and Deep Learning chapter from Artificial Intelligence in Surgery: An AI Primer for Surgical Practice designed to offer hands-on experience with artificial neural networks to surgeons with little to no coding experience.","topicNames":["deep-learning","neural-network","cholec-tinytools-dataset"],"topicsNotShown":0,"allTopics":["deep-learning","neural-network","cholec-tinytools-dataset"],"primaryLanguage":{"name":"Jupyter Notebook","color":"#DA5B0B"},"pullRequestCount":0,"issueCount":0,"starsCount":14,"forksCount":4,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-06-23T12:43:58.455Z"}},{"type":"Public","name":"out-of-body-detector","owner":"CAMMA-public","isFork":false,"description":"Application for classifying out-of-body images in endoscopic videos","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":5,"forksCount":1,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-06-07T12:51:20.817Z"}},{"type":"Public","name":"tripnet","owner":"CAMMA-public","isFork":false,"description":"","topicNames":["action-recognition","action-triplet","deep-learning"],"topicsNotShown":0,"allTopics":["action-recognition","action-triplet","deep-learning"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":2,"starsCount":14,"forksCount":3,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-03-01T13:07:22.805Z"}},{"type":"Public","name":"attention-tripnet","owner":"CAMMA-public","isFork":false,"description":"","topicNames":["python","attention-mechanism","action-recognition","action-triplet"],"topicsNotShown":0,"allTopics":["python","attention-mechanism","action-recognition","action-triplet"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":8,"forksCount":1,"license":"Other","participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-03-01T13:07:08.930Z"}},{"type":"Public","name":"cholect45","owner":"CAMMA-public","isFork":false,"description":"Laparoscopic video dataset for surgical action triplet recognition","topicNames":["python","tensorflow","python3","pytorch","tensorflow2","laparoscopy","endoscopic-images","cholect45","cholect50","action-triplet"],"topicsNotShown":3,"allTopics":["python","tensorflow","python3","pytorch","tensorflow2","laparoscopy","endoscopic-images","cholect45","cholect50","action-triplet","cholect40","dataset","action-recognition"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":1,"starsCount":34,"forksCount":2,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-02-08T15:07:35.691Z"}},{"type":"Public","name":"cholectriplet2022","owner":"CAMMA-public","isFork":false,"description":"CholecTriplet 2022 challenge on surgical action triplet detection","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":{"name":"Jupyter 
Notebook","color":"#DA5B0B"},"pullRequestCount":0,"issueCount":0,"starsCount":9,"forksCount":3,"license":"MIT License","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-01-09T16:09:03.028Z"}},{"type":"Public","name":".github","owner":"CAMMA-public","isFork":false,"description":"","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":null,"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":0,"license":null,"participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2022-12-22T22:13:05.703Z"}},{"type":"Public","name":"TF-Cholec80","owner":"CAMMA-public","isFork":false,"description":"Library packaging the Cholec80 dataset for easy handling with Tensorflow.","topicNames":["video","dataset","surgery"],"topicsNotShown":0,"allTopics":["video","dataset","surgery"],"primaryLanguage":{"name":"Jupyter Notebook","color":"#DA5B0B"},"pullRequestCount":1,"issueCount":3,"starsCount":19,"forksCount":5,"license":"Other","participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2022-12-19T04:00:33.093Z"}},{"type":"Public","name":"HPE-AdaptOR","owner":"CAMMA-public","isFork":false,"description":"Code for Unsupervised domain adaptation for clinician pose estimation and instance segmentation in the OR","topicNames":[],"topicsNotShown":0,"allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":1,"starsCount":3,"forksCount":0,"license":"Other","participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2022-07-07T05:30:03.518Z"}},{"type":"Public","name":"ORPose-Color","owner":"CAMMA-public","isFork":false,"description":"Inference demo for the MICCAI-2020 paper \"Self-supervision on Unlabelled OR Data for Multi-person 2D/3D Human Pose Estimation\"","topicNames":["colab","knowledge-distillation","data-disillation","operating-room","low-resolution-images","human-pose-estimation"],"topicsNotShown":0,"allTopics":["colab","knowledge-distillation","data-disillation","operating-room","low-resolution-images","human-pose-estimation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":11,"forksCount":2,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2022-05-19T08:39:32.090Z"}},{"type":"Public","name":"cholectriplet2021","owner":"CAMMA-public","isFork":false,"description":"","topicNames":["python","challenge","deep-learning","action-recognition","action-triplet"],"topicsNotShown":0,"allTopics":["python","challenge","deep-learning","action-recognition","action-triplet"],"primaryLanguage":{"name":"Jupyter Notebook","color":"#DA5B0B"},"pullRequestCount":0,"issueCount":0,"starsCount":4,"forksCount":2,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2022-04-26T15:52:22.752Z"}},{"type":"Public","name":"cvs_annotator","owner":"CAMMA-public","isFork":false,"description":"Application for reviewing and annotating frames with critical view of safety (CVS) criteria and other relevant 
information","topicNames":["annotation-tool","surgical-data-science","cholecystectomy","medical-computer-vision"],"topicsNotShown":0,"allTopics":["annotation-tool","surgical-data-science","cholecystectomy","medical-computer-vision"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":2,"starsCount":2,"forksCount":2,"license":"Other","participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2021-08-13T14:18:04.125Z"}},{"type":"Public","name":"MVOR","owner":"CAMMA-public","isFork":false,"description":"Multi-View Operating Room (MVOR) dataset consists of synchronized multi-view frames recorded by three RGB-D cameras in a hybrid OR during real clinical interventions. We provide camera calibration parameters, color and depth frames, human bounding boxes, and 2D/3D pose annotations. The MVOR was released in the MICCAI-LABELS 2018 workshop. ","topicNames":["colab","person-detection","3dposeestimation","multi-view-rgbd-images","operating-room","dataset","human-pose-estimation"],"topicsNotShown":0,"allTopics":["colab","person-detection","3dposeestimation","multi-view-rgbd-images","operating-room","dataset","human-pose-estimation"],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":48,"forksCount":9,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2020-12-11T13:48:14.118Z"}},{"type":"Public","name":"Surgical-Phase-Recognition","owner":"CAMMA-public","isFork":false,"description":"Demo for surgical phase recognition on videos of laparoscopic cholecystectomy using a CNN-biLSTM-CRF model presented in \"Learning from a tiny dataset of manual annotations: a teacher/student approach for surgical phase recognition\" (IPCAI 2019) ","topicNames":["deep-learning","lstm","convolutional-neural-networks"],"topicsNotShown":0,"allTopics":["deep-learning","lstm","convolutional-neural-networks"],"primaryLanguage":{"name":"Jupyter Notebook","color":"#DA5B0B"},"pullRequestCount":0,"issueCount":0,"starsCount":12,"forksCount":2,"license":"Other","participation":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2020-09-30T14:04:56.296Z"}}],"repositoryCount":32,"userInfo":null,"searchable":true,"definitions":[],"typeFilters":[{"id":"all","text":"All"},{"id":"public","text":"Public"},{"id":"source","text":"Sources"},{"id":"fork","text":"Forks"},{"id":"archived","text":"Archived"},{"id":"template","text":"Templates"}],"compactMode":false},"title":"Repositories"}