Question about new_instance & new_class benchmark #761
-
Hi! The original dataset reports:

Current Classes: [0, 1, 2, ..., 109]

However, when I use nc_benchmark, the printout shows a different (re-mapped) class list. Which implementation should I stick to, and what should I refer to? Thank you in advance!
-
Hi @ElvishElvis! Your question is indeed very relevant, since this may be a point of confusion worth clarifying in the documentation in the future.

For your specific use case, I'd suggest the lower-level `dataset_benchmark` generator, which is only partially documented because it is just a rename of this method. It accepts a list of AvalancheDatasets, lets you set up optional "task" labels, and is pretty much agnostic to the content of the PyTorch datasets.

Alternatively, you can use the `nc_benchmark` generator, as done here for the RotatedMNIST benchmark, but keep in mind that `nc_benchmark` was designed with a different objective in mind (splitting a dataset based on class information).

Please also note that `ni_benchmark` assumes you want to concatenate the datasets before performing the split. This is why classes are re-mapped sequentially, as you can see from your output.

Let me know if this solves your issues and clears up the confusion.
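To make the re-mapping behaviour concrete, here is a minimal, library-free sketch of what "classes are re-mapped sequentially after concatenation" means. This is only an illustration of the idea; the function name, label values, and first-appearance ordering are assumptions for the example, not Avalanche's actual implementation:

```python
# Illustration only: how class IDs can end up re-mapped sequentially when
# labelled datasets are concatenated before splitting (the idea behind the
# printout differing from the original class list). Not Avalanche's code.

def remap_classes_sequentially(labels):
    """Map each distinct label to 0..N-1 in order of first appearance."""
    mapping = {}
    for y in labels:
        if y not in mapping:
            mapping[y] = len(mapping)
    return [mapping[y] for y in labels], mapping

# Two toy datasets whose original label spaces are non-contiguous/overlapping.
labels_a = [10, 10, 42, 42]   # class labels of dataset A
labels_b = [42, 7, 7, 10]     # class labels of dataset B

concatenated = labels_a + labels_b
remapped, mapping = remap_classes_sequentially(concatenated)

print(mapping)    # {10: 0, 42: 1, 7: 2}
print(remapped)   # [0, 0, 1, 1, 1, 2, 2, 0]
```

So even if your original dataset prints classes 0..109, after concatenation and re-mapping the benchmark's printout can show a different, densely renumbered class list; that is expected rather than a bug.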
-
Copying my previous question here from the Slack channel for reference. Question: Answer: (by andcos)