Activity · agencyenterprise/autoformalism-with-llms

Repository: agencyenterprise/autoformalism-with-llms (public, org-owned, default branch: main, created 2024-04-17)

2024-05-02 20:47 UTC — push to main by vaiana (Michael Vaiana), 1 commit
    "Update README"

2024-05-02 20:43 UTC — push to main by vaiana, 2 commits
    "Add grid to plots"

2024-05-02 20:23 UTC — force push to main by vaiana
    "Add llama3 experiment"

2024-05-02 19:53 UTC — push to main by vaiana, 1 commit
    "Add llama3 experiment"

2024-04-22 15:48 UTC — branch llama3 created by vaiana
    "WIP Share with Florin

    I can't get Llama 3 to run on multiple GPUs on my home machine.
    When I set CUDA_VISIBLE_DEVICES=0 and use `device_map="auto"`, things
    work for 8B and 70B, but it's really slow, especially for 70B, which
    requires offloading 100 GB of data to disk. In fact, the benchmark
    took 700 seconds (more than 10 minutes) to generate 10 tokens. This
    is too slow to run the autoformalism experiment.

    When I set the device map by restricting GPU memory, it loads the
    model without error, but then I get a device-mismatch error in
    a residual connection (tensor on cuda:0, expected on cuda:1). Setting
    no_split_module_classes did not resolve this error. So I'm stuck
    with either a single slow GPU or errors on multiple GPUs."

2024-04-17 18:49 UTC — branch main created by vaiana
    "Add in-context results"
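The "WIP Share with Florin" commit describes sharding Llama 3 across GPUs by restricting per-device memory and setting `no_split_module_classes`. A minimal sketch of that setup, assuming the Hugging Face `transformers`/`accelerate` conventions — the memory limits and model name below are illustrative assumptions, not the author's exact settings, and the actual load call is commented out because it requires GPUs and the model weights:

```python
# Sketch of the multi-GPU loading approach described in the commit message.
# All concrete values here are illustrative assumptions.

# Cap per-device memory so accelerate splits the model across both GPUs
# (and spills the remainder to CPU) instead of placing everything on cuda:0.
max_memory = {0: "22GiB", 1: "22GiB", "cpu": "100GiB"}

# The commit mentions no_split_module_classes: for Llama-style models, the
# decoder layer is the unit that must not be split across devices, which is
# what the commit's residual-connection device-mismatch error points at.
no_split_module_classes = ["LlamaDecoderLayer"]

# The actual load (commented out: needs GPUs and the model weights):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "meta-llama/Meta-Llama-3-70B",  # hypothetical checkpoint name
#     device_map="auto",
#     max_memory=max_memory,
#     torch_dtype="auto",
# )
```

Per the commit, neither configuration worked well: `device_map="auto"` on one visible GPU ran but was far too slow, and the memory-restricted multi-GPU split loaded but crashed at inference time.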